00:00:00.001 Started by upstream project "autotest-per-patch" build number 126142
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.076 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.077 The recommended git tool is: git
00:00:00.077 using credential 00000000-0000-0000-0000-000000000002
00:00:00.080 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.121 Fetching changes from the remote Git repository
00:00:00.123 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.155 Using shallow fetch with depth 1
00:00:00.155 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.155 > git --version # timeout=10
00:00:00.192 > git --version # 'git version 2.39.2'
00:00:00.192 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.221 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.221 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.067 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.078 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.089 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:07.089 > git config core.sparsecheckout # timeout=10
00:00:07.099 > git read-tree -mu HEAD # timeout=10
00:00:07.116 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:07.137 Commit message: "inventory: add WCP3 to free inventory"
00:00:07.137 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:07.247 [Pipeline] Start of Pipeline
00:00:07.262 [Pipeline] library
00:00:07.263 Loading library shm_lib@master
00:00:07.263 Library shm_lib@master is cached. Copying from home.
00:00:07.277 [Pipeline] node
00:00:07.296 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.297 [Pipeline] {
00:00:07.305 [Pipeline] catchError
00:00:07.306 [Pipeline] {
00:00:07.315 [Pipeline] wrap
00:00:07.321 [Pipeline] {
00:00:07.326 [Pipeline] stage
00:00:07.327 [Pipeline] { (Prologue)
00:00:07.521 [Pipeline] sh
00:00:08.413 + logger -p user.info -t JENKINS-CI
00:00:08.437 [Pipeline] echo
00:00:08.439 Node: WFP8
00:00:08.447 [Pipeline] sh
00:00:08.790 [Pipeline] setCustomBuildProperty
00:00:08.804 [Pipeline] echo
00:00:08.806 Cleanup processes
00:00:08.811 [Pipeline] sh
00:00:09.104 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.104 9272 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.119 [Pipeline] sh
00:00:09.412 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.413 ++ grep -v 'sudo pgrep'
00:00:09.413 ++ awk '{print $1}'
00:00:09.413 + sudo kill -9
00:00:09.413 + true
00:00:09.428 [Pipeline] cleanWs
00:00:09.437 [WS-CLEANUP] Deleting project workspace...
00:00:09.437 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.449 [WS-CLEANUP] done
00:00:09.454 [Pipeline] setCustomBuildProperty
00:00:09.467 [Pipeline] sh
00:00:09.753 + sudo git config --global --replace-all safe.directory '*'
00:00:09.842 [Pipeline] httpRequest
00:00:11.282 [Pipeline] echo
00:00:11.284 Sorcerer 10.211.164.101 is alive
00:00:11.294 [Pipeline] httpRequest
00:00:11.300 HttpMethod: GET
00:00:11.300 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:11.301 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:11.320 Response Code: HTTP/1.1 200 OK
00:00:11.320 Success: Status code 200 is in the accepted range: 200,404
00:00:11.321 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:19.430 [Pipeline] sh
00:00:19.726 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:19.747 [Pipeline] httpRequest
00:00:19.780 [Pipeline] echo
00:00:19.782 Sorcerer 10.211.164.101 is alive
00:00:19.791 [Pipeline] httpRequest
00:00:19.797 HttpMethod: GET
00:00:19.797 URL: http://10.211.164.101/packages/spdk_5f33ec93a56f491b73e8a0c4698fc977bdc1e033.tar.gz
00:00:19.799 Sending request to url: http://10.211.164.101/packages/spdk_5f33ec93a56f491b73e8a0c4698fc977bdc1e033.tar.gz
00:00:19.820 Response Code: HTTP/1.1 200 OK
00:00:19.820 Success: Status code 200 is in the accepted range: 200,404
00:00:19.821 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5f33ec93a56f491b73e8a0c4698fc977bdc1e033.tar.gz
00:00:52.120 [Pipeline] sh
00:00:52.416 + tar --no-same-owner -xf spdk_5f33ec93a56f491b73e8a0c4698fc977bdc1e033.tar.gz
00:00:54.978 [Pipeline] sh
00:00:55.270 + git -C spdk log --oneline -n5
00:00:55.270 5f33ec93a util: add spdk_read_sysfs_attribute
00:00:55.270 312b24912 doc: fix deprecation.md typo
00:00:55.270 719d03c6a sock/uring: only register net impl if supported
00:00:55.270 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:00:55.270 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:00:55.284 [Pipeline] }
00:00:55.302 [Pipeline] // stage
00:00:55.312 [Pipeline] stage
00:00:55.314 [Pipeline] { (Prepare)
00:00:55.334 [Pipeline] writeFile
00:00:55.348 [Pipeline] sh
00:00:55.633 + logger -p user.info -t JENKINS-CI
00:00:55.648 [Pipeline] sh
00:00:55.937 + logger -p user.info -t JENKINS-CI
00:00:55.951 [Pipeline] sh
00:00:56.243 + cat autorun-spdk.conf
00:00:56.243 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:56.243 SPDK_TEST_NVMF=1
00:00:56.243 SPDK_TEST_NVME_CLI=1
00:00:56.243 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:56.243 SPDK_TEST_NVMF_NICS=e810
00:00:56.243 SPDK_TEST_VFIOUSER=1
00:00:56.243 SPDK_RUN_UBSAN=1
00:00:56.243 NET_TYPE=phy
00:00:56.251 RUN_NIGHTLY=0
00:00:56.257 [Pipeline] readFile
00:00:56.310 [Pipeline] withEnv
00:00:56.313 [Pipeline] {
00:00:56.328 [Pipeline] sh
00:00:56.620 + set -ex
00:00:56.620 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:56.620 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:56.620 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:56.620 ++ SPDK_TEST_NVMF=1
00:00:56.620 ++ SPDK_TEST_NVME_CLI=1
00:00:56.620 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:56.620 ++ SPDK_TEST_NVMF_NICS=e810
00:00:56.620 ++ SPDK_TEST_VFIOUSER=1
00:00:56.620 ++ SPDK_RUN_UBSAN=1
00:00:56.620 ++ NET_TYPE=phy
00:00:56.620 ++ RUN_NIGHTLY=0
00:00:56.620 + case $SPDK_TEST_NVMF_NICS in
00:00:56.620 + DRIVERS=ice
00:00:56.620 + [[ tcp == \r\d\m\a ]]
00:00:56.620 + [[ -n ice ]]
00:00:56.620 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:56.620 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:59.922 rmmod: ERROR: Module irdma is not currently loaded
00:00:59.923 rmmod: ERROR: Module i40iw is not currently loaded
00:00:59.923 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:59.923 + true
00:00:59.923 + for D in $DRIVERS
00:00:59.923 + sudo modprobe ice
00:00:59.923 + exit 0
00:00:59.933 [Pipeline] }
00:00:59.952 [Pipeline] // withEnv
00:00:59.958 [Pipeline] }
00:00:59.975 [Pipeline] // stage
00:00:59.986 [Pipeline] catchError
00:00:59.988 [Pipeline] {
00:01:00.004 [Pipeline] timeout
00:01:00.004 Timeout set to expire in 50 min
00:01:00.006 [Pipeline] {
00:01:00.023 [Pipeline] stage
00:01:00.026 [Pipeline] { (Tests)
00:01:00.043 [Pipeline] sh
00:01:00.334 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:00.334 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:00.334 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:00.334 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:00.334 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:00.334 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:00.334 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:00.334 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:00.334 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:00.334 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:00.334 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:00.334 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:00.334 + source /etc/os-release
00:01:00.334 ++ NAME='Fedora Linux'
00:01:00.334 ++ VERSION='38 (Cloud Edition)'
00:01:00.334 ++ ID=fedora
00:01:00.334 ++ VERSION_ID=38
00:01:00.334 ++ VERSION_CODENAME=
00:01:00.334 ++ PLATFORM_ID=platform:f38
00:01:00.334 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:00.334 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:00.334 ++ LOGO=fedora-logo-icon
00:01:00.334 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:00.334 ++ HOME_URL=https://fedoraproject.org/
00:01:00.334 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:00.334 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:00.334 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:00.334 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:00.334 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:00.334 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:00.334 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:00.334 ++ SUPPORT_END=2024-05-14
00:01:00.334 ++ VARIANT='Cloud Edition'
00:01:00.334 ++ VARIANT_ID=cloud
00:01:00.334 + uname -a
00:01:00.334 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:00.334 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:02.876 Hugepages
00:01:02.876 node hugesize free / total
00:01:02.876 node0 1048576kB 0 / 0
00:01:02.876 node0 2048kB 0 / 0
00:01:02.876 node1 1048576kB 0 / 0
00:01:02.876 node1 2048kB 0 / 0
00:01:02.876
00:01:02.876 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:02.876 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:02.876 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:02.876 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:02.876 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:02.876 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:02.876 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:02.876 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:02.876 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:02.876 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:02.876 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:02.876 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:02.876 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:02.876 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:02.876 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:02.876 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:02.876 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:02.876 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:02.876 + rm -f /tmp/spdk-ld-path
00:01:02.876 + source autorun-spdk.conf
00:01:02.876 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:02.876 ++ SPDK_TEST_NVMF=1
00:01:02.876 ++ SPDK_TEST_NVME_CLI=1
00:01:02.876 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:02.876 ++ SPDK_TEST_NVMF_NICS=e810
00:01:02.876 ++ SPDK_TEST_VFIOUSER=1
00:01:02.876 ++ SPDK_RUN_UBSAN=1
00:01:02.876 ++ NET_TYPE=phy
00:01:02.876 ++ RUN_NIGHTLY=0
00:01:02.876 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:02.876 + [[ -n '' ]]
00:01:02.876 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:02.876 + for M in /var/spdk/build-*-manifest.txt
00:01:02.876 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:02.876 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:02.876 + for M in /var/spdk/build-*-manifest.txt
00:01:02.876 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:02.876 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:02.876 ++ uname
00:01:02.876 + [[ Linux == \L\i\n\u\x ]]
00:01:02.876 + sudo dmesg -T
00:01:02.876 + sudo dmesg --clear
00:01:02.876 + dmesg_pid=10180
00:01:02.876 + [[ Fedora Linux == FreeBSD ]]
00:01:02.876 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:02.876 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:02.876 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:02.876 + sudo dmesg -Tw
00:01:02.876 + [[ -x /usr/src/fio-static/fio ]]
00:01:02.876 + export FIO_BIN=/usr/src/fio-static/fio
00:01:02.876 + FIO_BIN=/usr/src/fio-static/fio
00:01:02.876 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:02.876 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:02.876 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:02.876 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:02.876 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:02.876 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:02.876 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:02.876 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:02.876 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:02.876 Test configuration:
00:01:02.876 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:02.876 SPDK_TEST_NVMF=1
00:01:02.876 SPDK_TEST_NVME_CLI=1
00:01:02.876 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:02.876 SPDK_TEST_NVMF_NICS=e810
00:01:02.876 SPDK_TEST_VFIOUSER=1
00:01:02.876 SPDK_RUN_UBSAN=1
00:01:02.876 NET_TYPE=phy
00:01:02.876 RUN_NIGHTLY=0
18:53:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
18:53:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
18:53:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
18:53:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
18:53:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:53:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:53:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:53:05 -- paths/export.sh@5 -- $ export PATH
18:53:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:53:05 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
18:53:05 -- common/autobuild_common.sh@444 -- $ date +%s
00:01:03.137 18:53:05 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720803185.XXXXXX
18:53:05 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720803185.8Ffgmp
18:53:05 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
18:53:05 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
18:53:05 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
18:53:05 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
18:53:05 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
18:53:05 -- common/autobuild_common.sh@460 -- $ get_config_params
18:53:05 -- common/autotest_common.sh@396 -- $ xtrace_disable
18:53:05 -- common/autotest_common.sh@10 -- $ set +x
18:53:05 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
18:53:05 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
18:53:05 -- pm/common@17 -- $ local monitor
18:53:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:53:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:53:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:53:05 -- pm/common@21 -- $ date +%s
18:53:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:53:05 -- pm/common@21 -- $ date +%s
18:53:05 -- pm/common@25 -- $ sleep 1
18:53:05 -- pm/common@21 -- $ date +%s
18:53:05 -- pm/common@21 -- $ date +%s
18:53:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720803185
18:53:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720803185
18:53:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720803185
18:53:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720803185
00:01:03.137 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720803185_collect-vmstat.pm.log
00:01:03.137 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720803185_collect-cpu-load.pm.log
00:01:03.137 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720803185_collect-cpu-temp.pm.log
00:01:03.137 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720803185_collect-bmc-pm.bmc.pm.log
00:01:04.079 18:53:06 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:01:04.079 18:53:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
18:53:06 -- spdk/autobuild.sh@12 -- $ umask 022
18:53:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
18:53:06 -- spdk/autobuild.sh@16 -- $ date -u
00:01:04.079 Fri Jul 12 04:53:06 PM UTC 2024
18:53:06 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:04.079 v24.09-pre-204-g5f33ec93a
18:53:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
18:53:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
18:53:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
18:53:06 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
18:53:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable
18:53:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:04.079 ************************************
00:01:04.079 START TEST ubsan
00:01:04.079 ************************************
18:53:06 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:04.079 using ubsan
00:01:04.079
00:01:04.079 real 0m0.000s
00:01:04.079 user 0m0.000s
00:01:04.079 sys 0m0.000s
18:53:06 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
18:53:06 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:04.079 ************************************
00:01:04.079 END TEST ubsan
00:01:04.079 ************************************
18:53:06 -- common/autotest_common.sh@1142 -- $ return 0
18:53:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
18:53:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
18:53:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
18:53:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
18:53:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
18:53:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
18:53:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
18:53:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
18:53:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:04.649 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:04.649 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:05.588 Using 'verbs' RDMA provider
00:01:21.432 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:33.677 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:33.677 Creating mk/config.mk...done.
00:01:33.677 Creating mk/cc.flags.mk...done.
00:01:33.677 Type 'make' to build.
18:53:34 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
18:53:34 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
18:53:34 -- common/autotest_common.sh@1105 -- $ xtrace_disable
18:53:34 -- common/autotest_common.sh@10 -- $ set +x
00:01:33.677 ************************************
00:01:33.677 START TEST make
00:01:33.677 ************************************
18:53:34 make -- common/autotest_common.sh@1123 -- $ make -j96
00:01:33.677 make[1]: Nothing to be done for 'all'.
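
Outside of Jenkins, the configure and make steps recorded above reduce to a short standalone sequence. A minimal sketch, assuming a local SPDK checkout with submodules: the flag set and the -j96 job count are copied verbatim from this log, while the checkout path is machine-specific.

    # Replay of the build step recorded above. The configure flags are the
    # ones expanded from get_config_params earlier in this log; adjust the
    # checkout path and -j value for your own machine.
    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j96
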
00:01:34.620 The Meson build system
00:01:34.620 Version: 1.3.1
00:01:34.620 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:34.620 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:34.620 Build type: native build
00:01:34.620 Project name: libvfio-user
00:01:34.620 Project version: 0.0.1
00:01:34.620 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:34.620 C linker for the host machine: cc ld.bfd 2.39-16
00:01:34.620 Host machine cpu family: x86_64
00:01:34.620 Host machine cpu: x86_64
00:01:34.620 Run-time dependency threads found: YES
00:01:34.620 Library dl found: YES
00:01:34.620 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:34.620 Run-time dependency json-c found: YES 0.17
00:01:34.620 Run-time dependency cmocka found: YES 1.1.7
00:01:34.620 Program pytest-3 found: NO
00:01:34.620 Program flake8 found: NO
00:01:34.620 Program misspell-fixer found: NO
00:01:34.620 Program restructuredtext-lint found: NO
00:01:34.620 Program valgrind found: YES (/usr/bin/valgrind)
00:01:34.620 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:34.620 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:34.620 Compiler for C supports arguments -Wwrite-strings: YES
00:01:34.620 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:34.620 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:34.620 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:34.620 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:34.620 Build targets in project: 8
00:01:34.620 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:34.620 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:34.620
00:01:34.620 libvfio-user 0.0.1
00:01:34.620
00:01:34.620 User defined options
00:01:34.620 buildtype : debug
00:01:34.620 default_library: shared
00:01:34.620 libdir : /usr/local/lib
00:01:34.620
00:01:34.620 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:34.908 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:34.908 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:34.908 [2/37] Compiling C object samples/null.p/null.c.o
00:01:34.908 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:34.908 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:34.908 [5/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:34.908 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:34.908 [7/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:34.908 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:34.908 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:34.908 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:34.908 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:34.908 [12/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:34.908 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:34.908 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:34.908 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:34.908 [16/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:34.908 [17/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:34.908 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:34.908 [19/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:34.908 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:34.908 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:34.908 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:34.908 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:34.908 [24/37] Compiling C object samples/server.p/server.c.o
00:01:34.908 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:34.908 [26/37] Compiling C object samples/client.p/client.c.o
00:01:35.168 [27/37] Linking target samples/client
00:01:35.168 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:35.168 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:35.168 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:35.168 [31/37] Linking target test/unit_tests
00:01:35.168 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:35.427 [33/37] Linking target samples/server
00:01:35.427 [34/37] Linking target samples/lspci
00:01:35.427 [35/37] Linking target samples/null
00:01:35.427 [36/37] Linking target samples/gpio-pci-idio-16
00:01:35.427 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:35.427 INFO: autodetecting backend as ninja
00:01:35.427 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
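
With the 37 libvfio-user targets linked, the next entry stages the library into SPDK's build tree via a DESTDIR-redirected meson install, so nothing lands in the system-wide /usr/local. A minimal equivalent of that invocation, assuming the same directory layout as this workspace:

    # Stage libvfio-user under spdk/build/libvfio-user instead of /usr/local;
    # build-debug is the meson build directory configured earlier in this log.
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
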
00:01:35.427 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:35.688 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:35.688 ninja: no work to do.
00:01:40.978 The Meson build system
00:01:40.978 Version: 1.3.1
00:01:40.978 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:40.978 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:40.978 Build type: native build
00:01:40.978 Program cat found: YES (/usr/bin/cat)
00:01:40.978 Project name: DPDK
00:01:40.978 Project version: 24.03.0
00:01:40.978 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:40.978 C linker for the host machine: cc ld.bfd 2.39-16
00:01:40.978 Host machine cpu family: x86_64
00:01:40.978 Host machine cpu: x86_64
00:01:40.978 Message: ## Building in Developer Mode ##
00:01:40.978 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:40.978 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:40.978 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:40.978 Program python3 found: YES (/usr/bin/python3)
00:01:40.978 Program cat found: YES (/usr/bin/cat)
00:01:40.978 Compiler for C supports arguments -march=native: YES
00:01:40.978 Checking for size of "void *" : 8
00:01:40.978 Checking for size of "void *" : 8 (cached)
00:01:40.978 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:40.978 Library m found: YES
00:01:40.978 Library numa found: YES
00:01:40.978 Has header "numaif.h" : YES
00:01:40.978 Library fdt found: NO
00:01:40.978 Library execinfo found: NO
00:01:40.978 Has header "execinfo.h" : YES
00:01:40.979 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:40.979 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:40.979 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:40.979 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:40.979 Run-time dependency openssl found: YES 3.0.9
00:01:40.979 Run-time dependency libpcap found: YES 1.10.4
00:01:40.979 Has header "pcap.h" with dependency libpcap: YES
00:01:40.979 Compiler for C supports arguments -Wcast-qual: YES
00:01:40.979 Compiler for C supports arguments -Wdeprecated: YES
00:01:40.979 Compiler for C supports arguments -Wformat: YES
00:01:40.979 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:40.979 Compiler for C supports arguments -Wformat-security: NO
00:01:40.979 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:40.979 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:40.979 Compiler for C supports arguments -Wnested-externs: YES
00:01:40.979 Compiler for C supports arguments -Wold-style-definition: YES
00:01:40.979 Compiler for C supports arguments -Wpointer-arith: YES
00:01:40.979 Compiler for C supports arguments -Wsign-compare: YES
00:01:40.979 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:40.979 Compiler for C supports arguments -Wundef: YES
00:01:40.979 Compiler for C supports arguments -Wwrite-strings: YES
00:01:40.979 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:40.979 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:40.979 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:40.979 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:40.979 Program objdump found: YES (/usr/bin/objdump)
00:01:40.979 Compiler for C supports arguments -mavx512f: YES
00:01:40.979 Checking if "AVX512 checking" compiles: YES
00:01:40.979 Fetching value of define "__SSE4_2__" : 1
00:01:40.979 Fetching value of define "__AES__" : 1
00:01:40.979 Fetching value of define "__AVX__" : 1
00:01:40.979 Fetching value of define "__AVX2__" : 1
00:01:40.979 Fetching value of define "__AVX512BW__" : 1
00:01:40.979 Fetching value of define "__AVX512CD__" : 1
00:01:40.979 Fetching value of define "__AVX512DQ__" : 1
00:01:40.979 Fetching value of define "__AVX512F__" : 1
00:01:40.979 Fetching value of define "__AVX512VL__" : 1
00:01:40.979 Fetching value of define "__PCLMUL__" : 1
00:01:40.979 Fetching value of define "__RDRND__" : 1
00:01:40.979 Fetching value of define "__RDSEED__" : 1
00:01:40.979 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:40.979 Fetching value of define "__znver1__" : (undefined)
00:01:40.979 Fetching value of define "__znver2__" : (undefined)
00:01:40.979 Fetching value of define "__znver3__" : (undefined)
00:01:40.979 Fetching value of define "__znver4__" : (undefined)
00:01:40.979 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:40.979 Message: lib/log: Defining dependency "log"
00:01:40.979 Message: lib/kvargs: Defining dependency "kvargs"
00:01:40.979 Message: lib/telemetry: Defining dependency "telemetry"
00:01:40.979 Checking for function "getentropy" : NO
00:01:40.979 Message: lib/eal: Defining dependency "eal"
00:01:40.979 Message: lib/ring: Defining dependency "ring"
00:01:40.979 Message: lib/rcu: Defining dependency "rcu"
00:01:40.979 Message: lib/mempool: Defining dependency "mempool"
00:01:40.979 Message: lib/mbuf: Defining dependency "mbuf"
00:01:40.979 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:40.979 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:40.979 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:40.979 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:40.979 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:40.979 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:40.979 Compiler for C supports arguments -mpclmul: YES
00:01:40.979 Compiler for C supports arguments -maes: YES
00:01:40.979 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:40.979 Compiler for C supports arguments -mavx512bw: YES
00:01:40.979 Compiler for C supports arguments -mavx512dq: YES
00:01:40.979 Compiler for C supports arguments -mavx512vl: YES
00:01:40.979 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:40.979 Compiler for C supports arguments -mavx2: YES
00:01:40.979 Compiler for C supports arguments -mavx: YES
00:01:40.979 Message: lib/net: Defining dependency "net"
00:01:40.979 Message: lib/meter: Defining dependency "meter"
00:01:40.979 Message: lib/ethdev: Defining dependency "ethdev"
00:01:40.979 Message: lib/pci: Defining dependency "pci"
00:01:40.979 Message: lib/cmdline: Defining dependency "cmdline"
00:01:40.979 Message: lib/hash: Defining dependency "hash"
00:01:40.979 Message: lib/timer: Defining dependency "timer"
00:01:40.979 Message: lib/compressdev: Defining dependency "compressdev"
00:01:40.979 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:40.979 Message: lib/dmadev: Defining dependency "dmadev"
00:01:40.979 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:40.979 Message: lib/power: Defining dependency "power"
00:01:40.979 Message: lib/reorder: Defining dependency "reorder"
00:01:40.979 Message: lib/security: Defining dependency "security"
00:01:40.979 Has header "linux/userfaultfd.h" : YES
00:01:40.979 Has header "linux/vduse.h" : YES
00:01:40.979 Message: lib/vhost: Defining dependency "vhost"
00:01:40.979 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:40.979 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:40.979 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:40.979 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:40.979 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:40.979 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:40.979 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:40.979 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:40.979 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:40.979 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:40.979 Program doxygen found: YES (/usr/bin/doxygen)
00:01:40.979 Configuring doxy-api-html.conf using configuration
00:01:40.979 Configuring doxy-api-man.conf using configuration
00:01:40.979 Program mandb found: YES (/usr/bin/mandb)
00:01:40.979 Program sphinx-build found: NO
00:01:40.979 Configuring rte_build_config.h using configuration
00:01:40.979 Message:
00:01:40.979 =================
00:01:40.979 Applications Enabled
00:01:40.979 =================
00:01:40.979
00:01:40.979 apps:
00:01:40.979
00:01:40.979
00:01:40.979 Message:
00:01:40.979 =================
00:01:40.979 Libraries Enabled
00:01:40.979 =================
00:01:40.979
00:01:40.979 libs:
00:01:40.979 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:40.979 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:40.979 cryptodev, dmadev, power, reorder, security, vhost,
00:01:40.979
00:01:40.979 Message:
00:01:40.979 ===============
00:01:40.979 Drivers Enabled
00:01:40.979 ===============
00:01:40.979
00:01:40.979 common:
00:01:40.979
00:01:40.979 bus:
00:01:40.979 pci, vdev,
00:01:40.979 mempool:
00:01:40.979 ring,
00:01:40.979 dma:
00:01:40.979
00:01:40.979 net:
00:01:40.979
00:01:40.979 crypto:
00:01:40.979
00:01:40.979 compress:
00:01:40.979
00:01:40.979 vdpa:
00:01:40.979
00:01:40.979
00:01:40.979 Message:
00:01:40.979 =================
00:01:40.979 Content Skipped
00:01:40.979 =================
00:01:40.979
00:01:40.979 apps:
00:01:40.979 dumpcap: explicitly disabled via build config
00:01:40.979 graph: explicitly disabled via build config
00:01:40.979 pdump: explicitly disabled via build config
00:01:40.979 proc-info: explicitly disabled via build config
00:01:40.979 test-acl: explicitly disabled via build config
00:01:40.979 test-bbdev: explicitly disabled via build config
00:01:40.979 test-cmdline: explicitly disabled via build config
00:01:40.979 test-compress-perf: explicitly disabled via build config
00:01:40.979 test-crypto-perf: explicitly disabled via build config
00:01:40.979 test-dma-perf: explicitly disabled via build config
00:01:40.979 test-eventdev: explicitly disabled via build config
00:01:40.979 test-fib: explicitly disabled via build config
00:01:40.979 test-flow-perf: explicitly disabled via build config
00:01:40.979 test-gpudev: explicitly disabled via build config
00:01:40.980 test-mldev: explicitly disabled via build config
00:01:40.980 test-pipeline: explicitly disabled via build config
00:01:40.980 test-pmd: explicitly disabled via build config
00:01:40.980 test-regex: explicitly disabled via build config
00:01:40.980 test-sad: explicitly disabled via build config
00:01:40.980 test-security-perf: explicitly disabled via build config
00:01:40.980
00:01:40.980 libs:
00:01:40.980 argparse: explicitly disabled via build config
00:01:40.980 metrics: explicitly disabled via build config
00:01:40.980 acl: explicitly disabled via build config
00:01:40.980 bbdev: explicitly disabled via build config
00:01:40.980 bitratestats: explicitly disabled via build config
00:01:40.980 bpf: explicitly disabled via build config
00:01:40.980 cfgfile: explicitly disabled via build config
00:01:40.980 distributor: explicitly disabled via build config
00:01:40.980 efd: explicitly disabled via build config
00:01:40.980 eventdev: explicitly disabled via build config
00:01:40.980 dispatcher: explicitly disabled via build config
00:01:40.980 gpudev: explicitly disabled via build config
00:01:40.980 gro: explicitly disabled via build config
00:01:40.980 gso: explicitly disabled via build config
00:01:40.980 ip_frag: explicitly disabled via build config
00:01:40.980 jobstats: explicitly disabled via build config
00:01:40.980 latencystats: explicitly disabled via build config
00:01:40.980 lpm: explicitly disabled via build config
00:01:40.980 member: explicitly disabled via build config
00:01:40.980 pcapng: explicitly disabled via build config
00:01:40.980 rawdev: explicitly disabled via build config
00:01:40.980 regexdev: explicitly disabled via build config
00:01:40.980 mldev: explicitly disabled via build config
00:01:40.980 rib: explicitly disabled via build config
00:01:40.980 sched: explicitly disabled via build config
00:01:40.980 stack: explicitly disabled via build config
00:01:40.980 ipsec: explicitly disabled via build config
00:01:40.980 pdcp: explicitly disabled via build config
00:01:40.980 fib: explicitly disabled via build config
00:01:40.980 port: explicitly disabled via build config
00:01:40.980 pdump: explicitly disabled via build config
00:01:40.980 table: explicitly disabled via build config
00:01:40.980 pipeline: explicitly disabled via build config
00:01:40.980 graph: explicitly disabled via build config
00:01:40.980 node: explicitly disabled via build config
00:01:40.980
00:01:40.980 drivers:
00:01:40.980 common/cpt: not in enabled drivers build config
00:01:40.980 common/dpaax: not in enabled drivers build config
00:01:40.980 common/iavf: not in enabled drivers build config
00:01:40.980 common/idpf: not in enabled drivers build config
00:01:40.980 common/ionic: not in enabled drivers build config
00:01:40.980 common/mvep: not in enabled drivers build config
00:01:40.980 common/octeontx: not in enabled drivers build config
00:01:40.980 bus/auxiliary: not in enabled drivers build config
00:01:40.980 bus/cdx: not in enabled drivers build config
00:01:40.980 bus/dpaa: not in enabled drivers build config
00:01:40.980 bus/fslmc: not in enabled drivers build config
00:01:40.980 bus/ifpga: not in enabled drivers build config
00:01:40.980 bus/platform: not in enabled drivers build config
00:01:40.980 bus/uacce: not in enabled drivers build config
00:01:40.980 bus/vmbus: not in enabled drivers build config
00:01:40.980 common/cnxk: not in enabled drivers build config
00:01:40.980 common/mlx5: not in enabled drivers build config
00:01:40.980 common/nfp: not in enabled drivers build config
00:01:40.980 common/nitrox: not in enabled drivers build config
00:01:40.980 common/qat: not in enabled drivers build config
00:01:40.980 common/sfc_efx: not in enabled drivers build config
00:01:40.980 mempool/bucket: not in enabled drivers build config
00:01:40.980 mempool/cnxk: not in enabled drivers build config
00:01:40.980 mempool/dpaa: not in enabled drivers build config
00:01:40.980 mempool/dpaa2: not in enabled drivers build config
00:01:40.980 mempool/octeontx: not in enabled drivers build config
00:01:40.980 mempool/stack: not in enabled drivers build config
00:01:40.980 dma/cnxk: not in enabled drivers build config
00:01:40.980 dma/dpaa: not in enabled drivers build config
00:01:40.980 dma/dpaa2: not in enabled drivers build config
00:01:40.980 dma/hisilicon: not in enabled drivers build config
00:01:40.980 dma/idxd: not in enabled drivers build config
00:01:40.980 dma/ioat: not in enabled drivers build config
00:01:40.980 dma/skeleton: not in enabled drivers build config
00:01:40.980 net/af_packet: not in enabled drivers build config
00:01:40.980 net/af_xdp: not in enabled drivers build config
00:01:40.980 net/ark: not in enabled drivers build config
00:01:40.980 net/atlantic: not in enabled drivers build config
00:01:40.980 net/avp: not in enabled drivers build config
00:01:40.980 net/axgbe: not in enabled drivers build config
00:01:40.980 net/bnx2x: not in enabled drivers build config
00:01:40.980 net/bnxt: not in enabled drivers build config
00:01:40.980 net/bonding: not in enabled drivers build config
00:01:40.980 net/cnxk: not in enabled drivers build config
00:01:40.980 net/cpfl: not in enabled drivers build config
00:01:40.980 net/cxgbe: not in enabled drivers build config
00:01:40.980 net/dpaa: not in enabled drivers build config
00:01:40.980 net/dpaa2: not in enabled drivers build config
00:01:40.980 net/e1000: not in enabled drivers build config
00:01:40.980 net/ena: not in enabled drivers build config
00:01:40.980 net/enetc: not in enabled drivers build config
00:01:40.980 net/enetfec: not in enabled drivers build config
00:01:40.980 net/enic: not in enabled drivers build config
00:01:40.980 net/failsafe: not in enabled drivers build config
00:01:40.980 net/fm10k: not in enabled drivers build config
00:01:40.980 net/gve: not in enabled drivers build config
00:01:40.980 net/hinic: not in enabled drivers build config
00:01:40.980 net/hns3: not in enabled drivers build config
00:01:40.980 net/i40e: not in enabled drivers build config
00:01:40.980 net/iavf: not in enabled drivers build config
00:01:40.980 net/ice: not in enabled drivers build config
00:01:40.980 net/idpf: not in enabled drivers build config
00:01:40.980 net/igc: not in enabled drivers build config
00:01:40.980 net/ionic: not in enabled drivers build config
00:01:40.980 net/ipn3ke: not in enabled drivers build config
00:01:40.980 net/ixgbe: not in enabled drivers build config
00:01:40.980 net/mana: not in enabled drivers build config
00:01:40.980 net/memif: not in enabled drivers build config
00:01:40.980 net/mlx4: not in enabled drivers build config
00:01:40.980 net/mlx5: not in enabled drivers build config
00:01:40.980 net/mvneta: not in enabled drivers build config
00:01:40.980 net/mvpp2: not in enabled drivers build config
00:01:40.980 net/netvsc: not in enabled drivers build config
00:01:40.980 net/nfb: not in enabled drivers build config
00:01:40.980 net/nfp: not in enabled drivers build config
00:01:40.980 net/ngbe: not in enabled drivers build config
00:01:40.980 net/null: not in enabled drivers build config
00:01:40.980 net/octeontx: not in enabled drivers build config
00:01:40.980 net/octeon_ep: not in enabled drivers build config
00:01:40.980 net/pcap: not in enabled drivers build config
00:01:40.980 net/pfe: not in enabled drivers build config
00:01:40.980 net/qede: not in enabled drivers build config
00:01:40.980 net/ring: not in enabled drivers build config
00:01:40.980 net/sfc: not in enabled drivers build config
00:01:40.980 net/softnic: not in enabled drivers build config
00:01:40.980 net/tap: not in enabled drivers build config
00:01:40.980 net/thunderx: not in enabled drivers build config
00:01:40.980 net/txgbe: not in enabled drivers build config
00:01:40.980 net/vdev_netvsc: not in enabled drivers build config
00:01:40.980 net/vhost: not in enabled drivers build config
00:01:40.981 net/virtio: not in enabled drivers build config
00:01:40.981 net/vmxnet3: not in enabled drivers build config
00:01:40.981 raw/*: missing internal dependency, "rawdev"
00:01:40.981 crypto/armv8: not in enabled drivers build config
00:01:40.981 crypto/bcmfs: not in enabled drivers build config
00:01:40.981 crypto/caam_jr: not in enabled drivers build config
00:01:40.981 crypto/ccp: not in enabled drivers build config
00:01:40.981 crypto/cnxk: not in enabled drivers build config
00:01:40.981 crypto/dpaa_sec: not in enabled drivers build config
00:01:40.981 crypto/dpaa2_sec: not in enabled drivers build config
00:01:40.981 crypto/ipsec_mb: not in enabled drivers build config
00:01:40.981 crypto/mlx5: not in enabled drivers build config
00:01:40.981 crypto/mvsam: not in enabled drivers build config
00:01:40.981 crypto/nitrox: not in enabled drivers build config
00:01:40.981 crypto/null: not in enabled drivers build config
00:01:40.981 crypto/octeontx: not in enabled drivers build config
00:01:40.981 crypto/openssl: not in enabled drivers build config
00:01:40.981 crypto/scheduler: not in enabled drivers build config
00:01:40.981 crypto/uadk: not in enabled drivers build config
00:01:40.981 crypto/virtio: not in enabled drivers build config
00:01:40.981 compress/isal: not in enabled drivers build config
00:01:40.981 compress/mlx5: not in enabled drivers build config
00:01:40.981 compress/nitrox: not in enabled drivers build config
00:01:40.981 compress/octeontx: not in enabled drivers build config
00:01:40.981 compress/zlib: not in enabled drivers build config
00:01:40.981 regex/*: missing internal dependency, "regexdev"
00:01:40.981 ml/*: missing internal dependency, "mldev"
00:01:40.981 vdpa/ifc: not in enabled drivers build config
00:01:40.981 vdpa/mlx5: not in enabled drivers build config
00:01:40.981 vdpa/nfp: not in enabled drivers build config
00:01:40.981 vdpa/sfc: not in enabled drivers build config
00:01:40.981 event/*: missing internal dependency, "eventdev"
00:01:40.981 baseband/*: missing internal dependency, "bbdev"
00:01:40.981 gpu/*: missing internal dependency, "gpudev"
00:01:40.981
00:01:40.981
00:01:40.981 Build targets in project: 85
00:01:40.981
00:01:40.981 DPDK 24.03.0
00:01:40.981
00:01:40.981 User defined options
00:01:40.981 buildtype : debug
00:01:40.981 default_library : shared
00:01:40.981 libdir : lib
00:01:40.981 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:40.981 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:40.981 c_link_args :
00:01:40.981 cpu_instruction_set: native
00:01:40.981 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:01:40.981 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:01:40.981 enable_docs : false
00:01:40.981 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:40.981 enable_kmods : false
00:01:40.981 max_lcores : 128
00:01:40.981 tests : false
00:01:40.981
00:01:40.981 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:40.981 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:40.981 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:40.981 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:40.981 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:40.981 [4/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:40.981 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:40.981 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:40.981 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:40.981 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:40.981 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:40.981 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:40.981 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:40.981 [12/268] Linking static target lib/librte_kvargs.a
00:01:40.981 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:41.247 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:41.247 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:41.247 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:41.247 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:41.247 [18/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:41.247 [19/268] Linking static target lib/librte_log.a
00:01:41.247 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:41.247 [21/268] Linking static target lib/librte_pci.a
00:01:41.247 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:41.247 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:41.247 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:41.511 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:41.511 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:41.511 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:41.511 [28/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:41.511 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:41.511 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:41.511 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:41.511 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:41.511 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:41.511 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:41.511 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:41.511 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:41.511 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:41.511 [38/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:41.511 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:41.511 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:41.511 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:41.511 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:41.511 [43/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:41.511 [44/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.511 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:41.511 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:41.511 [47/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:41.511 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:41.511 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:41.511 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:41.511 [51/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:41.511 [52/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:41.511 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:41.511 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:41.511 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:41.511 [56/268] Linking static target lib/librte_meter.a
00:01:41.511 [57/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:41.511 [58/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:41.511 [59/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:41.511 [60/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:41.511 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:41.511 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:41.511 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:41.511 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:41.511 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:41.511 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:41.511 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:41.511 [68/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:41.511 [69/268] Linking static target lib/librte_telemetry.a
00:01:41.511 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:41.511 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:41.511 [72/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:41.511 [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:41.511 [74/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:41.511 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:41.511 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:41.511 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:41.511 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:41.511 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:41.511 [80/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:41.511 [81/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:41.511 [82/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:41.511 [83/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.511 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:41.511 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:41.771 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:41.771 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:41.771 [88/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:41.771 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:41.771 [90/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:41.771 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:41.771 [92/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:41.771 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:41.771 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:41.771 [95/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:41.771 [96/268] Linking static target lib/librte_ring.a
00:01:41.771 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:41.771 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:41.771 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:41.771 [100/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:41.771 [101/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:41.771 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:41.771 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:41.771 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:41.771 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:41.771 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:41.771 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:41.771 [108/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:41.771 [109/268] Linking static target lib/librte_net.a
00:01:41.771 [110/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:41.771 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:41.771 [112/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:41.772 [113/268] Linking static target lib/librte_mempool.a
00:01:41.772 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:41.772 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:41.772 [116/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:41.772 [117/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:41.772 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:41.772 [119/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:41.772 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:41.772 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:41.772 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:41.772 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:41.772 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:41.772 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:41.772 [126/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:41.772 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:41.772 [128/268] Linking static target lib/librte_rcu.a
00:01:41.772 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:41.772 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:41.772 [131/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.772 [132/268] Linking static target lib/librte_cmdline.a
00:01:41.772 [133/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:41.772 [134/268] Linking static target lib/librte_eal.a
00:01:41.772 [135/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.772 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:41.772 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:41.772 [138/268] Linking target lib/librte_log.so.24.1
00:01:42.030 [139/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.030 [140/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.030 [141/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:42.030 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:42.030 [143/268] Linking static target lib/librte_mbuf.a
00:01:42.030 [144/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:42.030 [145/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:42.030 [146/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:42.030 [147/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:42.030 [148/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:42.030 [149/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:42.030 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:42.030 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:42.030 [152/268] Linking static target
lib/librte_dmadev.a 00:01:42.030 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:42.030 [154/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:42.030 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:42.030 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:42.030 [157/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.030 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:42.030 [159/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:42.030 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:42.030 [161/268] Linking target lib/librte_kvargs.so.24.1 00:01:42.030 [162/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.030 [163/268] Linking target lib/librte_telemetry.so.24.1 00:01:42.030 [164/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:42.030 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:42.030 [166/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:42.030 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:42.030 [168/268] Linking static target lib/librte_timer.a 00:01:42.030 [169/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:42.030 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:42.030 [171/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:42.030 [172/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:42.030 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:42.030 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:42.030 [175/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:42.030 [176/268] Linking static target lib/librte_compressdev.a 00:01:42.030 [177/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:42.030 [178/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:42.030 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:42.290 [180/268] Linking static target lib/librte_reorder.a 00:01:42.290 [181/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:42.290 [182/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:42.290 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:42.290 [184/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:42.290 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:42.290 [186/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:42.290 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:42.290 [188/268] Linking static target lib/librte_power.a 00:01:42.290 [189/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:42.290 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:42.290 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:42.290 [192/268] Linking static target lib/librte_hash.a 00:01:42.290 [193/268] 
Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:42.290 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:42.290 [195/268] Linking static target lib/librte_security.a 00:01:42.290 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:42.290 [197/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:42.290 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:42.290 [199/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:42.290 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:42.290 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.290 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.290 [203/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:42.290 [204/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:42.290 [205/268] Linking static target drivers/librte_bus_vdev.a 00:01:42.290 [206/268] Linking static target drivers/librte_mempool_ring.a 00:01:42.290 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:42.290 [208/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.549 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:42.549 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:42.549 [211/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:42.549 [212/268] Linking static target lib/librte_cryptodev.a 00:01:42.549 [213/268] Linking static target drivers/librte_bus_pci.a 00:01:42.549 [214/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.549 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.549 [216/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.549 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.807 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:42.807 [219/268] Linking static target lib/librte_ethdev.a 00:01:42.807 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.807 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.807 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.807 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.067 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:43.067 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.067 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.067 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.006 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:44.006 [229/268] Linking static target lib/librte_vhost.a 00:01:44.266 
[230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.175 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.459 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.030 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.030 [234/268] Linking target lib/librte_eal.so.24.1 00:01:52.291 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:52.291 [236/268] Linking target lib/librte_timer.so.24.1 00:01:52.291 [237/268] Linking target lib/librte_ring.so.24.1 00:01:52.291 [238/268] Linking target lib/librte_meter.so.24.1 00:01:52.291 [239/268] Linking target lib/librte_pci.so.24.1 00:01:52.291 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:52.291 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:52.291 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:52.291 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:52.291 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:52.291 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:52.291 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:52.291 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:52.291 [248/268] Linking target lib/librte_mempool.so.24.1 00:01:52.291 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:52.550 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:52.550 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:52.550 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:52.550 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:52.809 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:52.809 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:52.809 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:52.809 [257/268] Linking target lib/librte_net.so.24.1 00:01:52.809 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:52.809 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:52.809 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:52.809 [261/268] Linking target lib/librte_security.so.24.1 00:01:52.809 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:52.809 [263/268] Linking target lib/librte_hash.so.24.1 00:01:53.070 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:53.070 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:53.070 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:53.070 [267/268] Linking target lib/librte_power.so.24.1 00:01:53.070 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:53.070 INFO: autodetecting backend as ninja 00:01:53.070 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:54.009 CC lib/log/log.o 00:01:54.009 CC lib/log/log_flags.o 00:01:54.009 CC lib/log/log_deprecated.o 00:01:54.009 CC lib/ut_mock/mock.o 00:01:54.009 CC lib/ut/ut.o 
00:01:54.269 LIB libspdk_ut_mock.a 00:01:54.269 LIB libspdk_log.a 00:01:54.269 LIB libspdk_ut.a 00:01:54.269 SO libspdk_ut_mock.so.6.0 00:01:54.269 SO libspdk_ut.so.2.0 00:01:54.269 SO libspdk_log.so.7.0 00:01:54.269 SYMLINK libspdk_ut_mock.so 00:01:54.529 SYMLINK libspdk_ut.so 00:01:54.529 SYMLINK libspdk_log.so 00:01:54.788 CC lib/dma/dma.o 00:01:54.788 CC lib/ioat/ioat.o 00:01:54.788 CXX lib/trace_parser/trace.o 00:01:54.788 CC lib/util/base64.o 00:01:54.788 CC lib/util/bit_array.o 00:01:54.788 CC lib/util/cpuset.o 00:01:54.788 CC lib/util/crc16.o 00:01:54.788 CC lib/util/crc32.o 00:01:54.788 CC lib/util/crc32c.o 00:01:54.788 CC lib/util/crc32_ieee.o 00:01:54.788 CC lib/util/crc64.o 00:01:54.788 CC lib/util/dif.o 00:01:54.788 CC lib/util/fd.o 00:01:54.788 CC lib/util/file.o 00:01:54.788 CC lib/util/hexlify.o 00:01:54.788 CC lib/util/iov.o 00:01:54.788 CC lib/util/math.o 00:01:54.788 CC lib/util/pipe.o 00:01:54.788 CC lib/util/strerror_tls.o 00:01:54.788 CC lib/util/string.o 00:01:54.788 CC lib/util/uuid.o 00:01:54.788 CC lib/util/fd_group.o 00:01:54.788 CC lib/util/xor.o 00:01:54.788 CC lib/util/zipf.o 00:01:54.788 CC lib/vfio_user/host/vfio_user_pci.o 00:01:54.788 CC lib/vfio_user/host/vfio_user.o 00:01:54.788 LIB libspdk_dma.a 00:01:55.047 SO libspdk_dma.so.4.0 00:01:55.047 LIB libspdk_ioat.a 00:01:55.047 SYMLINK libspdk_dma.so 00:01:55.047 SO libspdk_ioat.so.7.0 00:01:55.047 SYMLINK libspdk_ioat.so 00:01:55.047 LIB libspdk_vfio_user.a 00:01:55.047 SO libspdk_vfio_user.so.5.0 00:01:55.047 LIB libspdk_util.a 00:01:55.307 SYMLINK libspdk_vfio_user.so 00:01:55.307 SO libspdk_util.so.9.1 00:01:55.307 SYMLINK libspdk_util.so 00:01:55.568 CC lib/conf/conf.o 00:01:55.568 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:55.568 CC lib/rdma_provider/common.o 00:01:55.568 CC lib/idxd/idxd.o 00:01:55.568 CC lib/idxd/idxd_user.o 00:01:55.568 CC lib/idxd/idxd_kernel.o 00:01:55.568 CC lib/json/json_parse.o 00:01:55.568 CC lib/vmd/vmd.o 00:01:55.568 CC lib/env_dpdk/env.o 00:01:55.568 CC lib/rdma_utils/rdma_utils.o 00:01:55.568 CC lib/json/json_util.o 00:01:55.568 CC lib/vmd/led.o 00:01:55.568 CC lib/env_dpdk/memory.o 00:01:55.568 CC lib/json/json_write.o 00:01:55.568 CC lib/env_dpdk/pci.o 00:01:55.568 CC lib/env_dpdk/init.o 00:01:55.568 CC lib/env_dpdk/threads.o 00:01:55.568 CC lib/env_dpdk/pci_ioat.o 00:01:55.568 CC lib/env_dpdk/pci_virtio.o 00:01:55.568 CC lib/env_dpdk/pci_vmd.o 00:01:55.568 CC lib/env_dpdk/pci_idxd.o 00:01:55.568 CC lib/env_dpdk/pci_event.o 00:01:55.568 CC lib/env_dpdk/sigbus_handler.o 00:01:55.568 CC lib/env_dpdk/pci_dpdk.o 00:01:55.568 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:55.568 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:55.828 LIB libspdk_conf.a 00:01:55.828 LIB libspdk_rdma_provider.a 00:01:55.828 SO libspdk_conf.so.6.0 00:01:55.828 SO libspdk_rdma_provider.so.6.0 00:01:55.828 LIB libspdk_rdma_utils.a 00:01:55.828 SYMLINK libspdk_conf.so 00:01:55.828 LIB libspdk_json.a 00:01:55.828 SO libspdk_rdma_utils.so.1.0 00:01:55.828 SYMLINK libspdk_rdma_provider.so 00:01:56.088 SO libspdk_json.so.6.0 00:01:56.088 SYMLINK libspdk_rdma_utils.so 00:01:56.088 SYMLINK libspdk_json.so 00:01:56.088 LIB libspdk_idxd.a 00:01:56.088 SO libspdk_idxd.so.12.0 00:01:56.088 LIB libspdk_vmd.a 00:01:56.088 SO libspdk_vmd.so.6.0 00:01:56.088 SYMLINK libspdk_idxd.so 00:01:56.349 LIB libspdk_trace_parser.a 00:01:56.349 SYMLINK libspdk_vmd.so 00:01:56.349 SO libspdk_trace_parser.so.5.0 00:01:56.349 CC lib/jsonrpc/jsonrpc_server.o 00:01:56.349 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:56.349 CC 
lib/jsonrpc/jsonrpc_client.o 00:01:56.349 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:56.349 SYMLINK libspdk_trace_parser.so 00:01:56.609 LIB libspdk_jsonrpc.a 00:01:56.609 SO libspdk_jsonrpc.so.6.0 00:01:56.609 SYMLINK libspdk_jsonrpc.so 00:01:56.609 LIB libspdk_env_dpdk.a 00:01:56.868 SO libspdk_env_dpdk.so.14.1 00:01:56.868 SYMLINK libspdk_env_dpdk.so 00:01:56.868 CC lib/rpc/rpc.o 00:01:57.128 LIB libspdk_rpc.a 00:01:57.128 SO libspdk_rpc.so.6.0 00:01:57.128 SYMLINK libspdk_rpc.so 00:01:57.697 CC lib/trace/trace.o 00:01:57.697 CC lib/trace/trace_flags.o 00:01:57.697 CC lib/notify/notify.o 00:01:57.697 CC lib/notify/notify_rpc.o 00:01:57.697 CC lib/trace/trace_rpc.o 00:01:57.697 CC lib/keyring/keyring.o 00:01:57.697 CC lib/keyring/keyring_rpc.o 00:01:57.697 LIB libspdk_notify.a 00:01:57.697 SO libspdk_notify.so.6.0 00:01:57.697 LIB libspdk_keyring.a 00:01:57.697 LIB libspdk_trace.a 00:01:57.697 SO libspdk_trace.so.10.0 00:01:57.697 SO libspdk_keyring.so.1.0 00:01:57.697 SYMLINK libspdk_notify.so 00:01:57.957 SYMLINK libspdk_trace.so 00:01:57.957 SYMLINK libspdk_keyring.so 00:01:58.217 CC lib/thread/thread.o 00:01:58.217 CC lib/thread/iobuf.o 00:01:58.217 CC lib/sock/sock.o 00:01:58.217 CC lib/sock/sock_rpc.o 00:01:58.478 LIB libspdk_sock.a 00:01:58.478 SO libspdk_sock.so.10.0 00:01:58.478 SYMLINK libspdk_sock.so 00:01:58.737 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:58.737 CC lib/nvme/nvme_ctrlr.o 00:01:58.737 CC lib/nvme/nvme_fabric.o 00:01:58.737 CC lib/nvme/nvme_ns_cmd.o 00:01:58.737 CC lib/nvme/nvme_ns.o 00:01:58.737 CC lib/nvme/nvme_pcie_common.o 00:01:58.737 CC lib/nvme/nvme_pcie.o 00:01:58.737 CC lib/nvme/nvme_qpair.o 00:01:58.737 CC lib/nvme/nvme.o 00:01:58.737 CC lib/nvme/nvme_quirks.o 00:01:58.737 CC lib/nvme/nvme_transport.o 00:01:58.737 CC lib/nvme/nvme_discovery.o 00:01:58.737 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:58.737 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:58.738 CC lib/nvme/nvme_tcp.o 00:01:58.738 CC lib/nvme/nvme_opal.o 00:01:58.738 CC lib/nvme/nvme_io_msg.o 00:01:58.738 CC lib/nvme/nvme_poll_group.o 00:01:58.738 CC lib/nvme/nvme_zns.o 00:01:58.738 CC lib/nvme/nvme_stubs.o 00:01:58.738 CC lib/nvme/nvme_auth.o 00:01:58.738 CC lib/nvme/nvme_cuse.o 00:01:58.738 CC lib/nvme/nvme_vfio_user.o 00:01:58.738 CC lib/nvme/nvme_rdma.o 00:01:59.306 LIB libspdk_thread.a 00:01:59.306 SO libspdk_thread.so.10.1 00:01:59.306 SYMLINK libspdk_thread.so 00:01:59.566 CC lib/virtio/virtio.o 00:01:59.566 CC lib/virtio/virtio_vhost_user.o 00:01:59.566 CC lib/virtio/virtio_pci.o 00:01:59.566 CC lib/virtio/virtio_vfio_user.o 00:01:59.566 CC lib/blob/blobstore.o 00:01:59.566 CC lib/vfu_tgt/tgt_endpoint.o 00:01:59.566 CC lib/vfu_tgt/tgt_rpc.o 00:01:59.566 CC lib/blob/request.o 00:01:59.566 CC lib/blob/zeroes.o 00:01:59.566 CC lib/blob/blob_bs_dev.o 00:01:59.566 CC lib/accel/accel.o 00:01:59.566 CC lib/accel/accel_rpc.o 00:01:59.566 CC lib/accel/accel_sw.o 00:01:59.566 CC lib/init/json_config.o 00:01:59.566 CC lib/init/subsystem.o 00:01:59.566 CC lib/init/subsystem_rpc.o 00:01:59.566 CC lib/init/rpc.o 00:01:59.825 LIB libspdk_init.a 00:01:59.825 SO libspdk_init.so.5.0 00:01:59.825 LIB libspdk_virtio.a 00:01:59.825 LIB libspdk_vfu_tgt.a 00:01:59.825 SO libspdk_virtio.so.7.0 00:01:59.825 SO libspdk_vfu_tgt.so.3.0 00:01:59.825 SYMLINK libspdk_init.so 00:01:59.825 SYMLINK libspdk_vfu_tgt.so 00:01:59.825 SYMLINK libspdk_virtio.so 00:02:00.083 CC lib/event/app.o 00:02:00.083 CC lib/event/reactor.o 00:02:00.083 CC lib/event/log_rpc.o 00:02:00.083 CC lib/event/app_rpc.o 00:02:00.083 CC lib/event/scheduler_static.o 
00:02:00.343 LIB libspdk_accel.a 00:02:00.343 SO libspdk_accel.so.15.1 00:02:00.343 SYMLINK libspdk_accel.so 00:02:00.343 LIB libspdk_nvme.a 00:02:00.602 LIB libspdk_event.a 00:02:00.602 SO libspdk_event.so.14.0 00:02:00.602 SO libspdk_nvme.so.13.1 00:02:00.602 SYMLINK libspdk_event.so 00:02:00.602 CC lib/bdev/bdev.o 00:02:00.602 CC lib/bdev/bdev_rpc.o 00:02:00.602 CC lib/bdev/bdev_zone.o 00:02:00.602 CC lib/bdev/part.o 00:02:00.602 CC lib/bdev/scsi_nvme.o 00:02:00.862 SYMLINK libspdk_nvme.so 00:02:01.801 LIB libspdk_blob.a 00:02:01.801 SO libspdk_blob.so.11.0 00:02:01.801 SYMLINK libspdk_blob.so 00:02:02.061 CC lib/lvol/lvol.o 00:02:02.061 CC lib/blobfs/blobfs.o 00:02:02.061 CC lib/blobfs/tree.o 00:02:02.631 LIB libspdk_bdev.a 00:02:02.631 SO libspdk_bdev.so.15.1 00:02:02.631 SYMLINK libspdk_bdev.so 00:02:02.631 LIB libspdk_blobfs.a 00:02:02.631 SO libspdk_blobfs.so.10.0 00:02:02.631 LIB libspdk_lvol.a 00:02:02.891 SO libspdk_lvol.so.10.0 00:02:02.891 SYMLINK libspdk_blobfs.so 00:02:02.891 SYMLINK libspdk_lvol.so 00:02:02.891 CC lib/nbd/nbd.o 00:02:02.891 CC lib/nbd/nbd_rpc.o 00:02:02.891 CC lib/scsi/dev.o 00:02:02.891 CC lib/scsi/lun.o 00:02:02.891 CC lib/scsi/port.o 00:02:02.891 CC lib/scsi/scsi.o 00:02:02.891 CC lib/scsi/scsi_bdev.o 00:02:02.891 CC lib/nvmf/ctrlr.o 00:02:02.891 CC lib/scsi/scsi_pr.o 00:02:02.891 CC lib/nvmf/ctrlr_discovery.o 00:02:02.891 CC lib/scsi/scsi_rpc.o 00:02:02.891 CC lib/nvmf/ctrlr_bdev.o 00:02:02.891 CC lib/scsi/task.o 00:02:02.891 CC lib/nvmf/subsystem.o 00:02:02.891 CC lib/nvmf/nvmf.o 00:02:02.891 CC lib/nvmf/nvmf_rpc.o 00:02:02.891 CC lib/ublk/ublk_rpc.o 00:02:02.891 CC lib/ublk/ublk.o 00:02:02.891 CC lib/ftl/ftl_core.o 00:02:02.891 CC lib/nvmf/transport.o 00:02:02.891 CC lib/ftl/ftl_init.o 00:02:02.891 CC lib/nvmf/tcp.o 00:02:02.891 CC lib/ftl/ftl_layout.o 00:02:02.891 CC lib/nvmf/stubs.o 00:02:02.891 CC lib/ftl/ftl_debug.o 00:02:02.891 CC lib/nvmf/mdns_server.o 00:02:02.891 CC lib/ftl/ftl_io.o 00:02:02.891 CC lib/nvmf/vfio_user.o 00:02:02.891 CC lib/ftl/ftl_sb.o 00:02:02.891 CC lib/ftl/ftl_l2p.o 00:02:02.891 CC lib/nvmf/rdma.o 00:02:02.891 CC lib/nvmf/auth.o 00:02:02.891 CC lib/ftl/ftl_l2p_flat.o 00:02:02.891 CC lib/ftl/ftl_band.o 00:02:02.891 CC lib/ftl/ftl_nv_cache.o 00:02:02.891 CC lib/ftl/ftl_band_ops.o 00:02:02.891 CC lib/ftl/ftl_writer.o 00:02:02.891 CC lib/ftl/ftl_rq.o 00:02:02.891 CC lib/ftl/ftl_reloc.o 00:02:02.891 CC lib/ftl/ftl_l2p_cache.o 00:02:02.891 CC lib/ftl/ftl_p2l.o 00:02:02.891 CC lib/ftl/mngt/ftl_mngt.o 00:02:02.891 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:02.891 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:02.891 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:02.891 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:02.891 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:02.891 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:02.891 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:02.891 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:02.891 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:02.891 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:02.891 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:02.891 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:02.891 CC lib/ftl/utils/ftl_conf.o 00:02:02.891 CC lib/ftl/utils/ftl_md.o 00:02:02.891 CC lib/ftl/utils/ftl_mempool.o 00:02:02.891 CC lib/ftl/utils/ftl_bitmap.o 00:02:02.891 CC lib/ftl/utils/ftl_property.o 00:02:02.891 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:02.891 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:02.891 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:02.891 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:02.891 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:02.891 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:02:02.891 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:02.891 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:02.891 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:02.891 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:02.891 CC lib/ftl/base/ftl_base_bdev.o 00:02:02.891 CC lib/ftl/ftl_trace.o 00:02:02.891 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:02.891 CC lib/ftl/base/ftl_base_dev.o 00:02:03.459 LIB libspdk_nbd.a 00:02:03.459 LIB libspdk_scsi.a 00:02:03.459 SO libspdk_nbd.so.7.0 00:02:03.459 SO libspdk_scsi.so.9.0 00:02:03.459 SYMLINK libspdk_nbd.so 00:02:03.718 SYMLINK libspdk_scsi.so 00:02:03.718 LIB libspdk_ublk.a 00:02:03.718 SO libspdk_ublk.so.3.0 00:02:03.718 SYMLINK libspdk_ublk.so 00:02:03.977 CC lib/iscsi/init_grp.o 00:02:03.977 CC lib/iscsi/conn.o 00:02:03.977 CC lib/iscsi/iscsi.o 00:02:03.977 CC lib/iscsi/md5.o 00:02:03.977 CC lib/iscsi/param.o 00:02:03.977 CC lib/iscsi/portal_grp.o 00:02:03.977 CC lib/iscsi/tgt_node.o 00:02:03.977 CC lib/iscsi/iscsi_subsystem.o 00:02:03.977 CC lib/vhost/vhost.o 00:02:03.977 CC lib/iscsi/iscsi_rpc.o 00:02:03.977 CC lib/iscsi/task.o 00:02:03.977 CC lib/vhost/vhost_rpc.o 00:02:03.977 CC lib/vhost/vhost_scsi.o 00:02:03.977 CC lib/vhost/vhost_blk.o 00:02:03.977 CC lib/vhost/rte_vhost_user.o 00:02:03.977 LIB libspdk_ftl.a 00:02:03.977 SO libspdk_ftl.so.9.0 00:02:04.237 SYMLINK libspdk_ftl.so 00:02:04.497 LIB libspdk_nvmf.a 00:02:04.757 LIB libspdk_vhost.a 00:02:04.757 SO libspdk_nvmf.so.18.1 00:02:04.757 SO libspdk_vhost.so.8.0 00:02:04.757 SYMLINK libspdk_vhost.so 00:02:04.757 SYMLINK libspdk_nvmf.so 00:02:04.757 LIB libspdk_iscsi.a 00:02:05.016 SO libspdk_iscsi.so.8.0 00:02:05.016 SYMLINK libspdk_iscsi.so 00:02:05.585 CC module/env_dpdk/env_dpdk_rpc.o 00:02:05.585 CC module/vfu_device/vfu_virtio.o 00:02:05.585 CC module/vfu_device/vfu_virtio_blk.o 00:02:05.585 CC module/vfu_device/vfu_virtio_scsi.o 00:02:05.585 CC module/vfu_device/vfu_virtio_rpc.o 00:02:05.845 CC module/blob/bdev/blob_bdev.o 00:02:05.845 LIB libspdk_env_dpdk_rpc.a 00:02:05.845 CC module/accel/ioat/accel_ioat.o 00:02:05.845 CC module/accel/ioat/accel_ioat_rpc.o 00:02:05.845 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:05.845 CC module/keyring/linux/keyring.o 00:02:05.845 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:05.845 CC module/accel/error/accel_error.o 00:02:05.845 CC module/accel/dsa/accel_dsa.o 00:02:05.845 CC module/keyring/linux/keyring_rpc.o 00:02:05.845 CC module/accel/iaa/accel_iaa_rpc.o 00:02:05.845 CC module/accel/iaa/accel_iaa.o 00:02:05.845 CC module/sock/posix/posix.o 00:02:05.845 CC module/accel/dsa/accel_dsa_rpc.o 00:02:05.845 CC module/accel/error/accel_error_rpc.o 00:02:05.845 CC module/keyring/file/keyring.o 00:02:05.845 CC module/scheduler/gscheduler/gscheduler.o 00:02:05.845 CC module/keyring/file/keyring_rpc.o 00:02:05.845 SO libspdk_env_dpdk_rpc.so.6.0 00:02:05.845 SYMLINK libspdk_env_dpdk_rpc.so 00:02:05.845 LIB libspdk_keyring_linux.a 00:02:05.845 LIB libspdk_scheduler_gscheduler.a 00:02:05.845 LIB libspdk_scheduler_dpdk_governor.a 00:02:05.845 LIB libspdk_keyring_file.a 00:02:05.845 SO libspdk_keyring_linux.so.1.0 00:02:05.845 LIB libspdk_accel_ioat.a 00:02:05.845 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:05.845 SO libspdk_scheduler_gscheduler.so.4.0 00:02:05.845 LIB libspdk_scheduler_dynamic.a 00:02:05.845 LIB libspdk_accel_error.a 00:02:05.845 LIB libspdk_accel_iaa.a 00:02:05.845 SO libspdk_keyring_file.so.1.0 00:02:05.845 SO libspdk_accel_ioat.so.6.0 00:02:05.845 SO libspdk_scheduler_dynamic.so.4.0 00:02:05.845 SO 
libspdk_accel_iaa.so.3.0 00:02:05.845 SYMLINK libspdk_keyring_linux.so 00:02:05.845 SO libspdk_accel_error.so.2.0 00:02:06.105 LIB libspdk_blob_bdev.a 00:02:06.105 LIB libspdk_accel_dsa.a 00:02:06.105 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:06.105 SYMLINK libspdk_scheduler_gscheduler.so 00:02:06.105 SYMLINK libspdk_keyring_file.so 00:02:06.105 SYMLINK libspdk_accel_ioat.so 00:02:06.105 SO libspdk_blob_bdev.so.11.0 00:02:06.105 SYMLINK libspdk_accel_iaa.so 00:02:06.105 SO libspdk_accel_dsa.so.5.0 00:02:06.105 SYMLINK libspdk_scheduler_dynamic.so 00:02:06.105 SYMLINK libspdk_accel_error.so 00:02:06.105 SYMLINK libspdk_blob_bdev.so 00:02:06.105 LIB libspdk_vfu_device.a 00:02:06.105 SYMLINK libspdk_accel_dsa.so 00:02:06.105 SO libspdk_vfu_device.so.3.0 00:02:06.105 SYMLINK libspdk_vfu_device.so 00:02:06.366 LIB libspdk_sock_posix.a 00:02:06.366 SO libspdk_sock_posix.so.6.0 00:02:06.366 SYMLINK libspdk_sock_posix.so 00:02:06.625 CC module/bdev/error/vbdev_error.o 00:02:06.625 CC module/bdev/gpt/vbdev_gpt.o 00:02:06.625 CC module/bdev/gpt/gpt.o 00:02:06.625 CC module/bdev/error/vbdev_error_rpc.o 00:02:06.625 CC module/bdev/aio/bdev_aio_rpc.o 00:02:06.625 CC module/bdev/aio/bdev_aio.o 00:02:06.625 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:06.625 CC module/bdev/lvol/vbdev_lvol.o 00:02:06.625 CC module/bdev/delay/vbdev_delay.o 00:02:06.625 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:06.625 CC module/bdev/raid/bdev_raid.o 00:02:06.625 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:06.625 CC module/bdev/raid/bdev_raid_rpc.o 00:02:06.625 CC module/bdev/raid/bdev_raid_sb.o 00:02:06.625 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:06.625 CC module/bdev/raid/raid0.o 00:02:06.625 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:06.625 CC module/bdev/raid/raid1.o 00:02:06.625 CC module/bdev/malloc/bdev_malloc.o 00:02:06.625 CC module/blobfs/bdev/blobfs_bdev.o 00:02:06.625 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:06.625 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:06.625 CC module/bdev/raid/concat.o 00:02:06.625 CC module/bdev/passthru/vbdev_passthru.o 00:02:06.625 CC module/bdev/null/bdev_null.o 00:02:06.625 CC module/bdev/null/bdev_null_rpc.o 00:02:06.625 CC module/bdev/nvme/bdev_nvme.o 00:02:06.625 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:06.625 CC module/bdev/nvme/nvme_rpc.o 00:02:06.625 CC module/bdev/split/vbdev_split.o 00:02:06.625 CC module/bdev/nvme/bdev_mdns_client.o 00:02:06.625 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:06.625 CC module/bdev/nvme/vbdev_opal.o 00:02:06.625 CC module/bdev/split/vbdev_split_rpc.o 00:02:06.625 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:06.625 CC module/bdev/ftl/bdev_ftl.o 00:02:06.625 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:06.625 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:06.625 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:06.625 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:06.625 CC module/bdev/iscsi/bdev_iscsi.o 00:02:06.625 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:06.625 LIB libspdk_blobfs_bdev.a 00:02:06.885 SO libspdk_blobfs_bdev.so.6.0 00:02:06.885 LIB libspdk_bdev_gpt.a 00:02:06.885 SYMLINK libspdk_blobfs_bdev.so 00:02:06.885 LIB libspdk_bdev_split.a 00:02:06.885 SO libspdk_bdev_gpt.so.6.0 00:02:06.885 LIB libspdk_bdev_passthru.a 00:02:06.885 LIB libspdk_bdev_error.a 00:02:06.885 LIB libspdk_bdev_null.a 00:02:06.885 LIB libspdk_bdev_ftl.a 00:02:06.885 SO libspdk_bdev_split.so.6.0 00:02:06.885 SO libspdk_bdev_passthru.so.6.0 00:02:06.885 LIB libspdk_bdev_zone_block.a 00:02:06.885 LIB libspdk_bdev_aio.a 
00:02:06.885 SO libspdk_bdev_error.so.6.0 00:02:06.885 SO libspdk_bdev_ftl.so.6.0 00:02:06.885 SO libspdk_bdev_null.so.6.0 00:02:06.885 SYMLINK libspdk_bdev_gpt.so 00:02:06.885 LIB libspdk_bdev_iscsi.a 00:02:06.885 SO libspdk_bdev_aio.so.6.0 00:02:06.885 SO libspdk_bdev_zone_block.so.6.0 00:02:06.885 LIB libspdk_bdev_malloc.a 00:02:06.885 SYMLINK libspdk_bdev_split.so 00:02:06.885 SYMLINK libspdk_bdev_passthru.so 00:02:06.885 LIB libspdk_bdev_delay.a 00:02:06.885 SYMLINK libspdk_bdev_error.so 00:02:06.885 SO libspdk_bdev_iscsi.so.6.0 00:02:06.885 SYMLINK libspdk_bdev_ftl.so 00:02:06.885 SYMLINK libspdk_bdev_null.so 00:02:06.885 SO libspdk_bdev_malloc.so.6.0 00:02:06.885 SO libspdk_bdev_delay.so.6.0 00:02:06.885 SYMLINK libspdk_bdev_aio.so 00:02:06.885 SYMLINK libspdk_bdev_zone_block.so 00:02:06.885 SYMLINK libspdk_bdev_iscsi.so 00:02:07.145 SYMLINK libspdk_bdev_malloc.so 00:02:07.145 LIB libspdk_bdev_lvol.a 00:02:07.145 SYMLINK libspdk_bdev_delay.so 00:02:07.145 LIB libspdk_bdev_virtio.a 00:02:07.145 SO libspdk_bdev_lvol.so.6.0 00:02:07.145 SO libspdk_bdev_virtio.so.6.0 00:02:07.145 SYMLINK libspdk_bdev_lvol.so 00:02:07.145 SYMLINK libspdk_bdev_virtio.so 00:02:07.405 LIB libspdk_bdev_raid.a 00:02:07.405 SO libspdk_bdev_raid.so.6.0 00:02:07.405 SYMLINK libspdk_bdev_raid.so 00:02:08.343 LIB libspdk_bdev_nvme.a 00:02:08.343 SO libspdk_bdev_nvme.so.7.0 00:02:08.343 SYMLINK libspdk_bdev_nvme.so 00:02:08.912 CC module/event/subsystems/vmd/vmd.o 00:02:08.912 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:08.912 CC module/event/subsystems/iobuf/iobuf.o 00:02:08.912 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:08.912 CC module/event/subsystems/sock/sock.o 00:02:08.912 CC module/event/subsystems/keyring/keyring.o 00:02:08.912 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:08.912 CC module/event/subsystems/scheduler/scheduler.o 00:02:08.912 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:09.173 LIB libspdk_event_vhost_blk.a 00:02:09.173 LIB libspdk_event_vfu_tgt.a 00:02:09.173 LIB libspdk_event_vmd.a 00:02:09.173 LIB libspdk_event_keyring.a 00:02:09.173 LIB libspdk_event_sock.a 00:02:09.173 LIB libspdk_event_scheduler.a 00:02:09.173 LIB libspdk_event_iobuf.a 00:02:09.173 SO libspdk_event_vhost_blk.so.3.0 00:02:09.173 SO libspdk_event_vfu_tgt.so.3.0 00:02:09.173 SO libspdk_event_keyring.so.1.0 00:02:09.173 SO libspdk_event_vmd.so.6.0 00:02:09.173 SO libspdk_event_sock.so.5.0 00:02:09.173 SO libspdk_event_scheduler.so.4.0 00:02:09.173 SO libspdk_event_iobuf.so.3.0 00:02:09.173 SYMLINK libspdk_event_vhost_blk.so 00:02:09.173 SYMLINK libspdk_event_keyring.so 00:02:09.173 SYMLINK libspdk_event_vfu_tgt.so 00:02:09.173 SYMLINK libspdk_event_sock.so 00:02:09.173 SYMLINK libspdk_event_vmd.so 00:02:09.173 SYMLINK libspdk_event_scheduler.so 00:02:09.173 SYMLINK libspdk_event_iobuf.so 00:02:09.433 CC module/event/subsystems/accel/accel.o 00:02:09.693 LIB libspdk_event_accel.a 00:02:09.693 SO libspdk_event_accel.so.6.0 00:02:09.693 SYMLINK libspdk_event_accel.so 00:02:09.953 CC module/event/subsystems/bdev/bdev.o 00:02:10.212 LIB libspdk_event_bdev.a 00:02:10.212 SO libspdk_event_bdev.so.6.0 00:02:10.212 SYMLINK libspdk_event_bdev.so 00:02:10.471 CC module/event/subsystems/scsi/scsi.o 00:02:10.471 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:10.471 CC module/event/subsystems/ublk/ublk.o 00:02:10.471 CC module/event/subsystems/nbd/nbd.o 00:02:10.471 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:10.729 LIB libspdk_event_nbd.a 00:02:10.729 LIB libspdk_event_scsi.a 00:02:10.729 LIB 
libspdk_event_ublk.a 00:02:10.729 SO libspdk_event_nbd.so.6.0 00:02:10.729 SO libspdk_event_scsi.so.6.0 00:02:10.729 SO libspdk_event_ublk.so.3.0 00:02:10.729 LIB libspdk_event_nvmf.a 00:02:10.729 SYMLINK libspdk_event_scsi.so 00:02:10.729 SYMLINK libspdk_event_nbd.so 00:02:10.729 SYMLINK libspdk_event_ublk.so 00:02:10.729 SO libspdk_event_nvmf.so.6.0 00:02:10.988 SYMLINK libspdk_event_nvmf.so 00:02:10.988 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:11.248 CC module/event/subsystems/iscsi/iscsi.o 00:02:11.248 LIB libspdk_event_vhost_scsi.a 00:02:11.248 LIB libspdk_event_iscsi.a 00:02:11.248 SO libspdk_event_vhost_scsi.so.3.0 00:02:11.248 SO libspdk_event_iscsi.so.6.0 00:02:11.248 SYMLINK libspdk_event_vhost_scsi.so 00:02:11.248 SYMLINK libspdk_event_iscsi.so 00:02:11.508 SO libspdk.so.6.0 00:02:11.508 SYMLINK libspdk.so 00:02:11.767 CXX app/trace/trace.o 00:02:11.767 CC app/spdk_nvme_perf/perf.o 00:02:11.767 CC app/spdk_top/spdk_top.o 00:02:11.767 CC app/trace_record/trace_record.o 00:02:11.767 CC app/spdk_nvme_discover/discovery_aer.o 00:02:11.767 CC test/rpc_client/rpc_client_test.o 00:02:11.767 CC app/spdk_nvme_identify/identify.o 00:02:11.767 CC app/spdk_lspci/spdk_lspci.o 00:02:11.767 TEST_HEADER include/spdk/accel_module.h 00:02:11.767 TEST_HEADER include/spdk/accel.h 00:02:11.767 TEST_HEADER include/spdk/base64.h 00:02:11.767 TEST_HEADER include/spdk/barrier.h 00:02:11.767 TEST_HEADER include/spdk/assert.h 00:02:11.767 TEST_HEADER include/spdk/bit_array.h 00:02:11.767 TEST_HEADER include/spdk/bdev.h 00:02:11.767 TEST_HEADER include/spdk/bdev_zone.h 00:02:11.767 TEST_HEADER include/spdk/bdev_module.h 00:02:11.767 TEST_HEADER include/spdk/bit_pool.h 00:02:11.767 TEST_HEADER include/spdk/blob_bdev.h 00:02:11.767 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:11.767 TEST_HEADER include/spdk/conf.h 00:02:11.767 TEST_HEADER include/spdk/blobfs.h 00:02:11.767 TEST_HEADER include/spdk/blob.h 00:02:11.767 TEST_HEADER include/spdk/config.h 00:02:11.767 TEST_HEADER include/spdk/crc32.h 00:02:11.767 TEST_HEADER include/spdk/cpuset.h 00:02:11.767 TEST_HEADER include/spdk/crc16.h 00:02:11.767 TEST_HEADER include/spdk/crc64.h 00:02:11.767 TEST_HEADER include/spdk/dma.h 00:02:11.767 TEST_HEADER include/spdk/dif.h 00:02:11.767 TEST_HEADER include/spdk/endian.h 00:02:11.767 TEST_HEADER include/spdk/event.h 00:02:11.767 TEST_HEADER include/spdk/env_dpdk.h 00:02:11.767 TEST_HEADER include/spdk/fd_group.h 00:02:12.033 TEST_HEADER include/spdk/env.h 00:02:12.033 CC app/spdk_dd/spdk_dd.o 00:02:12.033 TEST_HEADER include/spdk/fd.h 00:02:12.033 TEST_HEADER include/spdk/file.h 00:02:12.033 TEST_HEADER include/spdk/ftl.h 00:02:12.033 TEST_HEADER include/spdk/gpt_spec.h 00:02:12.033 TEST_HEADER include/spdk/hexlify.h 00:02:12.033 TEST_HEADER include/spdk/histogram_data.h 00:02:12.033 TEST_HEADER include/spdk/idxd_spec.h 00:02:12.033 TEST_HEADER include/spdk/idxd.h 00:02:12.033 TEST_HEADER include/spdk/ioat.h 00:02:12.033 TEST_HEADER include/spdk/init.h 00:02:12.033 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:12.033 TEST_HEADER include/spdk/json.h 00:02:12.033 TEST_HEADER include/spdk/ioat_spec.h 00:02:12.033 TEST_HEADER include/spdk/iscsi_spec.h 00:02:12.033 CC app/iscsi_tgt/iscsi_tgt.o 00:02:12.033 TEST_HEADER include/spdk/keyring.h 00:02:12.033 TEST_HEADER include/spdk/jsonrpc.h 00:02:12.033 TEST_HEADER include/spdk/keyring_module.h 00:02:12.033 TEST_HEADER include/spdk/lvol.h 00:02:12.033 TEST_HEADER include/spdk/log.h 00:02:12.033 TEST_HEADER include/spdk/likely.h 00:02:12.033 TEST_HEADER 
include/spdk/memory.h 00:02:12.033 TEST_HEADER include/spdk/mmio.h 00:02:12.033 TEST_HEADER include/spdk/nbd.h 00:02:12.033 TEST_HEADER include/spdk/notify.h 00:02:12.033 CC app/nvmf_tgt/nvmf_main.o 00:02:12.033 TEST_HEADER include/spdk/nvme.h 00:02:12.033 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:12.033 TEST_HEADER include/spdk/nvme_intel.h 00:02:12.033 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:12.033 TEST_HEADER include/spdk/nvme_spec.h 00:02:12.033 TEST_HEADER include/spdk/nvme_zns.h 00:02:12.033 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:12.033 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:12.033 TEST_HEADER include/spdk/nvmf.h 00:02:12.033 TEST_HEADER include/spdk/nvmf_transport.h 00:02:12.033 TEST_HEADER include/spdk/nvmf_spec.h 00:02:12.033 TEST_HEADER include/spdk/opal.h 00:02:12.033 TEST_HEADER include/spdk/opal_spec.h 00:02:12.033 TEST_HEADER include/spdk/pci_ids.h 00:02:12.033 TEST_HEADER include/spdk/queue.h 00:02:12.033 TEST_HEADER include/spdk/pipe.h 00:02:12.033 TEST_HEADER include/spdk/rpc.h 00:02:12.033 TEST_HEADER include/spdk/scheduler.h 00:02:12.033 TEST_HEADER include/spdk/reduce.h 00:02:12.033 TEST_HEADER include/spdk/scsi.h 00:02:12.033 TEST_HEADER include/spdk/scsi_spec.h 00:02:12.033 TEST_HEADER include/spdk/sock.h 00:02:12.033 TEST_HEADER include/spdk/stdinc.h 00:02:12.033 TEST_HEADER include/spdk/string.h 00:02:12.033 TEST_HEADER include/spdk/thread.h 00:02:12.033 TEST_HEADER include/spdk/trace_parser.h 00:02:12.033 TEST_HEADER include/spdk/trace.h 00:02:12.033 TEST_HEADER include/spdk/ublk.h 00:02:12.033 TEST_HEADER include/spdk/tree.h 00:02:12.033 TEST_HEADER include/spdk/util.h 00:02:12.033 TEST_HEADER include/spdk/uuid.h 00:02:12.033 TEST_HEADER include/spdk/version.h 00:02:12.033 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:12.033 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:12.033 TEST_HEADER include/spdk/vmd.h 00:02:12.033 TEST_HEADER include/spdk/vhost.h 00:02:12.033 CC app/spdk_tgt/spdk_tgt.o 00:02:12.033 TEST_HEADER include/spdk/xor.h 00:02:12.033 TEST_HEADER include/spdk/zipf.h 00:02:12.033 CXX test/cpp_headers/accel.o 00:02:12.033 CXX test/cpp_headers/assert.o 00:02:12.033 CXX test/cpp_headers/barrier.o 00:02:12.033 CXX test/cpp_headers/base64.o 00:02:12.033 CXX test/cpp_headers/accel_module.o 00:02:12.033 CXX test/cpp_headers/bdev_module.o 00:02:12.033 CXX test/cpp_headers/bdev.o 00:02:12.033 CXX test/cpp_headers/bdev_zone.o 00:02:12.033 CXX test/cpp_headers/bit_array.o 00:02:12.033 CXX test/cpp_headers/bit_pool.o 00:02:12.033 CXX test/cpp_headers/blob_bdev.o 00:02:12.033 CXX test/cpp_headers/blobfs_bdev.o 00:02:12.033 CXX test/cpp_headers/blobfs.o 00:02:12.033 CXX test/cpp_headers/blob.o 00:02:12.033 CXX test/cpp_headers/cpuset.o 00:02:12.033 CXX test/cpp_headers/config.o 00:02:12.033 CXX test/cpp_headers/conf.o 00:02:12.033 CXX test/cpp_headers/dif.o 00:02:12.033 CXX test/cpp_headers/crc16.o 00:02:12.033 CXX test/cpp_headers/crc32.o 00:02:12.033 CXX test/cpp_headers/crc64.o 00:02:12.033 CXX test/cpp_headers/dma.o 00:02:12.033 CXX test/cpp_headers/endian.o 00:02:12.033 CXX test/cpp_headers/fd_group.o 00:02:12.033 CXX test/cpp_headers/env_dpdk.o 00:02:12.033 CXX test/cpp_headers/env.o 00:02:12.033 CXX test/cpp_headers/event.o 00:02:12.033 CXX test/cpp_headers/fd.o 00:02:12.033 CXX test/cpp_headers/file.o 00:02:12.033 CXX test/cpp_headers/ftl.o 00:02:12.033 CXX test/cpp_headers/histogram_data.o 00:02:12.033 CXX test/cpp_headers/gpt_spec.o 00:02:12.033 CXX test/cpp_headers/hexlify.o 00:02:12.033 CXX test/cpp_headers/idxd_spec.o 
00:02:12.033 CXX test/cpp_headers/init.o 00:02:12.033 CXX test/cpp_headers/idxd.o 00:02:12.033 CXX test/cpp_headers/ioat.o 00:02:12.033 CXX test/cpp_headers/ioat_spec.o 00:02:12.033 CXX test/cpp_headers/iscsi_spec.o 00:02:12.033 CXX test/cpp_headers/json.o 00:02:12.033 CXX test/cpp_headers/keyring_module.o 00:02:12.033 CXX test/cpp_headers/jsonrpc.o 00:02:12.033 CXX test/cpp_headers/likely.o 00:02:12.033 CXX test/cpp_headers/keyring.o 00:02:12.033 CXX test/cpp_headers/lvol.o 00:02:12.033 CXX test/cpp_headers/memory.o 00:02:12.033 CXX test/cpp_headers/log.o 00:02:12.033 CXX test/cpp_headers/mmio.o 00:02:12.033 CXX test/cpp_headers/nbd.o 00:02:12.033 CXX test/cpp_headers/notify.o 00:02:12.033 CXX test/cpp_headers/nvme.o 00:02:12.033 CXX test/cpp_headers/nvme_ocssd.o 00:02:12.033 CXX test/cpp_headers/nvme_intel.o 00:02:12.033 CXX test/cpp_headers/nvme_spec.o 00:02:12.033 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:12.033 CXX test/cpp_headers/nvme_zns.o 00:02:12.033 CXX test/cpp_headers/nvmf_cmd.o 00:02:12.033 CC examples/ioat/verify/verify.o 00:02:12.033 CXX test/cpp_headers/nvmf.o 00:02:12.033 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:12.033 CXX test/cpp_headers/opal.o 00:02:12.033 CXX test/cpp_headers/nvmf_spec.o 00:02:12.034 CXX test/cpp_headers/nvmf_transport.o 00:02:12.034 CXX test/cpp_headers/pci_ids.o 00:02:12.034 CXX test/cpp_headers/pipe.o 00:02:12.034 CXX test/cpp_headers/opal_spec.o 00:02:12.034 CC examples/ioat/perf/perf.o 00:02:12.034 CXX test/cpp_headers/queue.o 00:02:12.034 CC examples/util/zipf/zipf.o 00:02:12.034 CC test/app/histogram_perf/histogram_perf.o 00:02:12.034 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:12.034 CC test/env/memory/memory_ut.o 00:02:12.034 CC test/thread/poller_perf/poller_perf.o 00:02:12.034 CXX test/cpp_headers/reduce.o 00:02:12.034 CC test/env/pci/pci_ut.o 00:02:12.034 CC test/app/jsoncat/jsoncat.o 00:02:12.034 CC test/app/stub/stub.o 00:02:12.034 CC app/fio/nvme/fio_plugin.o 00:02:12.034 CC test/env/vtophys/vtophys.o 00:02:12.310 CC test/app/bdev_svc/bdev_svc.o 00:02:12.310 CC test/dma/test_dma/test_dma.o 00:02:12.310 CC app/fio/bdev/fio_plugin.o 00:02:12.310 LINK rpc_client_test 00:02:12.310 LINK spdk_lspci 00:02:12.310 CC test/env/mem_callbacks/mem_callbacks.o 00:02:12.310 LINK interrupt_tgt 00:02:12.310 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:12.576 LINK nvmf_tgt 00:02:12.576 LINK spdk_trace_record 00:02:12.576 LINK iscsi_tgt 00:02:12.576 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:12.576 LINK spdk_nvme_discover 00:02:12.576 LINK histogram_perf 00:02:12.576 LINK poller_perf 00:02:12.576 LINK zipf 00:02:12.576 CXX test/cpp_headers/rpc.o 00:02:12.576 LINK env_dpdk_post_init 00:02:12.576 CXX test/cpp_headers/scheduler.o 00:02:12.576 CXX test/cpp_headers/scsi.o 00:02:12.576 CXX test/cpp_headers/scsi_spec.o 00:02:12.576 CXX test/cpp_headers/sock.o 00:02:12.576 CXX test/cpp_headers/stdinc.o 00:02:12.576 CXX test/cpp_headers/string.o 00:02:12.576 CXX test/cpp_headers/thread.o 00:02:12.576 CXX test/cpp_headers/trace.o 00:02:12.576 CXX test/cpp_headers/trace_parser.o 00:02:12.576 CXX test/cpp_headers/tree.o 00:02:12.576 CXX test/cpp_headers/ublk.o 00:02:12.576 CXX test/cpp_headers/util.o 00:02:12.576 CXX test/cpp_headers/uuid.o 00:02:12.576 CXX test/cpp_headers/version.o 00:02:12.576 CXX test/cpp_headers/vfio_user_pci.o 00:02:12.576 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:12.576 CXX test/cpp_headers/vhost.o 00:02:12.576 CXX test/cpp_headers/vfio_user_spec.o 00:02:12.576 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 
00:02:12.576 CXX test/cpp_headers/vmd.o 00:02:12.576 LINK spdk_tgt 00:02:12.576 CXX test/cpp_headers/xor.o 00:02:12.576 CXX test/cpp_headers/zipf.o 00:02:12.576 LINK spdk_dd 00:02:12.576 LINK jsoncat 00:02:12.576 LINK spdk_trace 00:02:12.835 LINK vtophys 00:02:12.835 LINK stub 00:02:12.835 LINK verify 00:02:12.835 LINK bdev_svc 00:02:12.835 LINK ioat_perf 00:02:12.835 LINK nvme_fuzz 00:02:12.835 LINK test_dma 00:02:12.835 LINK pci_ut 00:02:13.094 CC test/event/reactor/reactor.o 00:02:13.094 CC examples/idxd/perf/perf.o 00:02:13.094 CC examples/vmd/lsvmd/lsvmd.o 00:02:13.094 CC test/event/event_perf/event_perf.o 00:02:13.094 CC test/event/reactor_perf/reactor_perf.o 00:02:13.094 CC test/event/app_repeat/app_repeat.o 00:02:13.094 CC examples/vmd/led/led.o 00:02:13.094 CC examples/sock/hello_world/hello_sock.o 00:02:13.094 CC test/event/scheduler/scheduler.o 00:02:13.094 CC examples/thread/thread/thread_ex.o 00:02:13.094 CC app/vhost/vhost.o 00:02:13.094 LINK vhost_fuzz 00:02:13.094 LINK spdk_bdev 00:02:13.094 LINK mem_callbacks 00:02:13.094 LINK reactor 00:02:13.094 LINK spdk_nvme_perf 00:02:13.094 LINK event_perf 00:02:13.094 LINK lsvmd 00:02:13.094 LINK reactor_perf 00:02:13.094 LINK led 00:02:13.094 LINK spdk_nvme 00:02:13.094 LINK app_repeat 00:02:13.353 LINK spdk_nvme_identify 00:02:13.353 LINK hello_sock 00:02:13.353 LINK thread 00:02:13.353 LINK idxd_perf 00:02:13.353 LINK vhost 00:02:13.353 LINK scheduler 00:02:13.353 LINK spdk_top 00:02:13.353 CC test/nvme/sgl/sgl.o 00:02:13.353 CC test/nvme/e2edp/nvme_dp.o 00:02:13.353 CC test/nvme/err_injection/err_injection.o 00:02:13.353 CC test/nvme/reset/reset.o 00:02:13.353 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:13.353 CC test/nvme/aer/aer.o 00:02:13.353 CC test/nvme/simple_copy/simple_copy.o 00:02:13.353 CC test/nvme/fused_ordering/fused_ordering.o 00:02:13.353 CC test/nvme/startup/startup.o 00:02:13.353 CC test/nvme/reserve/reserve.o 00:02:13.353 CC test/nvme/overhead/overhead.o 00:02:13.353 CC test/nvme/boot_partition/boot_partition.o 00:02:13.353 CC test/nvme/cuse/cuse.o 00:02:13.353 CC test/nvme/connect_stress/connect_stress.o 00:02:13.353 CC test/nvme/compliance/nvme_compliance.o 00:02:13.353 CC test/nvme/fdp/fdp.o 00:02:13.353 CC test/blobfs/mkfs/mkfs.o 00:02:13.353 CC test/accel/dif/dif.o 00:02:13.612 LINK memory_ut 00:02:13.612 CC test/lvol/esnap/esnap.o 00:02:13.612 LINK boot_partition 00:02:13.612 LINK err_injection 00:02:13.612 LINK startup 00:02:13.612 LINK connect_stress 00:02:13.612 LINK doorbell_aers 00:02:13.612 LINK fused_ordering 00:02:13.612 LINK simple_copy 00:02:13.612 LINK reserve 00:02:13.612 LINK mkfs 00:02:13.612 LINK sgl 00:02:13.612 LINK reset 00:02:13.612 LINK nvme_dp 00:02:13.612 LINK aer 00:02:13.612 LINK overhead 00:02:13.612 LINK nvme_compliance 00:02:13.612 CC examples/nvme/hotplug/hotplug.o 00:02:13.612 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:13.612 CC examples/nvme/reconnect/reconnect.o 00:02:13.612 LINK fdp 00:02:13.612 CC examples/nvme/arbitration/arbitration.o 00:02:13.612 CC examples/nvme/hello_world/hello_world.o 00:02:13.612 CC examples/nvme/abort/abort.o 00:02:13.612 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:13.612 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:13.871 CC examples/accel/perf/accel_perf.o 00:02:13.871 CC examples/blob/cli/blobcli.o 00:02:13.871 CC examples/blob/hello_world/hello_blob.o 00:02:13.871 LINK dif 00:02:13.871 LINK cmb_copy 00:02:13.871 LINK hotplug 00:02:13.871 LINK pmr_persistence 00:02:13.871 LINK hello_world 00:02:13.871 LINK arbitration 
00:02:13.871 LINK iscsi_fuzz
00:02:13.871 LINK abort
00:02:14.130 LINK reconnect
00:02:14.130 LINK hello_blob
00:02:14.130 LINK nvme_manage
00:02:14.130 LINK accel_perf
00:02:14.130 LINK blobcli
00:02:14.389 CC test/bdev/bdevio/bdevio.o
00:02:14.389 LINK cuse
00:02:14.648 CC examples/bdev/hello_world/hello_bdev.o
00:02:14.648 CC examples/bdev/bdevperf/bdevperf.o
00:02:14.648 LINK bdevio
00:02:14.907 LINK hello_bdev
00:02:15.167 LINK bdevperf
00:02:15.736 CC examples/nvmf/nvmf/nvmf.o
00:02:15.996 LINK nvmf
00:02:16.935 LINK esnap
00:02:17.195
00:02:17.195 real 0m44.831s
00:02:17.195 user 6m30.247s
00:02:17.195 sys 3m26.734s
00:02:17.195 18:54:19 make -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:02:17.195 18:54:19 make -- common/autotest_common.sh@10 -- $ set +x
00:02:17.195 ************************************
00:02:17.195 END TEST make
00:02:17.195 ************************************
00:02:17.195 18:54:19 -- common/autotest_common.sh@1142 -- $ return 0
00:02:17.195 18:54:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:17.195 18:54:19 -- pm/common@29 -- $ signal_monitor_resources TERM
00:02:17.195 18:54:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:02:17.195 18:54:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.195 18:54:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:17.195 18:54:19 -- pm/common@44 -- $ pid=10215
00:02:17.195 18:54:19 -- pm/common@50 -- $ kill -TERM 10215
00:02:17.195 18:54:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.195 18:54:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:17.195 18:54:19 -- pm/common@44 -- $ pid=10217
00:02:17.195 18:54:19 -- pm/common@50 -- $ kill -TERM 10217
00:02:17.195 18:54:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.195 18:54:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:17.195 18:54:19 -- pm/common@44 -- $ pid=10218
00:02:17.195 18:54:19 -- pm/common@50 -- $ kill -TERM 10218
00:02:17.195 18:54:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.195 18:54:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:17.195 18:54:19 -- pm/common@44 -- $ pid=10242
00:02:17.195 18:54:19 -- pm/common@50 -- $ sudo -E kill -TERM 10242
00:02:17.456 18:54:19 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:02:17.456 18:54:19 -- nvmf/common.sh@7 -- # uname -s
00:02:17.456 18:54:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:17.456 18:54:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:17.456 18:54:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:17.456 18:54:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:17.456 18:54:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:17.456 18:54:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:17.456 18:54:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:17.456 18:54:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:17.456 18:54:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:17.456 18:54:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:17.456 18:54:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:02:17.456 18:54:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:02:17.456 18:54:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:17.456 18:54:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:17.456 18:54:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:17.456 18:54:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:17.456 18:54:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:17.456 18:54:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:17.456 18:54:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:17.456 18:54:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:17.456 18:54:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.456 18:54:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.456 18:54:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.456 18:54:19 -- paths/export.sh@5 -- # export PATH
00:02:17.456 18:54:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.456 18:54:19 -- nvmf/common.sh@47 -- # : 0
00:02:17.456 18:54:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:02:17.456 18:54:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:02:17.456 18:54:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:17.456 18:54:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:17.456 18:54:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:17.456 18:54:19 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:02:17.456 18:54:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:02:17.456 18:54:19 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:02:17.456 18:54:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:17.456 18:54:19 -- spdk/autotest.sh@32 -- # uname -s
00:02:17.456 18:54:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:17.456 18:54:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:17.456 18:54:19 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:17.456 18:54:19 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:17.456 18:54:19 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:17.456 18:54:19 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:17.456 18:54:19 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:17.456 18:54:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:17.456 18:54:19 -- spdk/autotest.sh@48 -- # udevadm_pid=70529
00:02:17.456 18:54:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:17.456 18:54:19 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:17.456 18:54:19 -- pm/common@17 -- # local monitor
00:02:17.456 18:54:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.456 18:54:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.456 18:54:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.456 18:54:19 -- pm/common@21 -- # date +%s
00:02:17.456 18:54:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.456 18:54:19 -- pm/common@21 -- # date +%s
00:02:17.456 18:54:19 -- pm/common@25 -- # sleep 1
00:02:17.456 18:54:19 -- pm/common@21 -- # date +%s
00:02:17.456 18:54:19 -- pm/common@21 -- # date +%s
00:02:17.456 18:54:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720803259
00:02:17.456 18:54:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720803259
00:02:17.457 18:54:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720803259
00:02:17.457 18:54:19 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720803259
00:02:17.457 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720803259_collect-vmstat.pm.log
00:02:17.457 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720803259_collect-cpu-load.pm.log
00:02:17.457 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720803259_collect-cpu-temp.pm.log
00:02:17.457 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720803259_collect-bmc-pm.bmc.pm.log
00:02:18.399 18:54:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:18.399 18:54:20 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:18.399 18:54:20 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:18.399 18:54:20 -- common/autotest_common.sh@10 -- # set +x
00:02:18.399 18:54:20 -- spdk/autotest.sh@59 -- # create_test_list
00:02:18.399 18:54:20 -- common/autotest_common.sh@746 -- # xtrace_disable
00:02:18.399 18:54:20 -- common/autotest_common.sh@10 -- # set +x
00:02:18.659 18:54:20 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:02:18.659 18:54:20 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:18.659 18:54:20 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
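The monitor startup traced a few entries above, paired with the stop_monitor_resources teardown at the end of TEST make, is a plain pid-file pattern: each collector is launched in the background with its output redirected to a .pm.log file, its PID is recorded under output/power, and shutdown later sends SIGTERM to whatever PID each file names. A minimal bash sketch of that pattern (start_monitor, stop_monitor, and POWER_DIR are illustrative names, not SPDK's actual pm/common helpers):

    #!/usr/bin/env bash
    # Illustrative pid-file monitor pattern; not SPDK's pm/common implementation.
    POWER_DIR=${POWER_DIR:-/tmp/power}

    start_monitor() {
        local name=$1; shift
        mkdir -p "$POWER_DIR"
        "$@" > "$POWER_DIR/$name.pm.log" 2>&1 &   # launch collector, redirect like the log above
        echo $! > "$POWER_DIR/$name.pid"          # remember its PID for teardown
    }

    stop_monitor() {
        local name=$1 pidfile=$POWER_DIR/$name.pid
        [[ -e $pidfile ]] || return 0             # mirrors the [[ -e ...pid ]] guards above
        kill -TERM "$(cat "$pidfile")" 2> /dev/null
        rm -f "$pidfile"
    }

    start_monitor collect-vmstat vmstat 1         # e.g. a vmstat-based collector
    trap 'stop_monitor collect-vmstat' EXIT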
00:02:18.659 18:54:20 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:18.659 18:54:20 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:18.659 18:54:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:18.659 18:54:20 -- common/autotest_common.sh@1455 -- # uname 00:02:18.659 18:54:20 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:18.659 18:54:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:18.659 18:54:20 -- common/autotest_common.sh@1475 -- # uname 00:02:18.659 18:54:21 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:18.659 18:54:21 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:18.659 18:54:21 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:18.659 18:54:21 -- spdk/autotest.sh@72 -- # hash lcov 00:02:18.659 18:54:21 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:18.659 18:54:21 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:18.659 --rc lcov_branch_coverage=1 00:02:18.659 --rc lcov_function_coverage=1 00:02:18.659 --rc genhtml_branch_coverage=1 00:02:18.659 --rc genhtml_function_coverage=1 00:02:18.659 --rc genhtml_legend=1 00:02:18.659 --rc geninfo_all_blocks=1 00:02:18.659 ' 00:02:18.659 18:54:21 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:18.659 --rc lcov_branch_coverage=1 00:02:18.659 --rc lcov_function_coverage=1 00:02:18.659 --rc genhtml_branch_coverage=1 00:02:18.659 --rc genhtml_function_coverage=1 00:02:18.659 --rc genhtml_legend=1 00:02:18.659 --rc geninfo_all_blocks=1 00:02:18.659 ' 00:02:18.659 18:54:21 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:18.659 --rc lcov_branch_coverage=1 00:02:18.659 --rc lcov_function_coverage=1 00:02:18.659 --rc genhtml_branch_coverage=1 00:02:18.659 --rc genhtml_function_coverage=1 00:02:18.659 --rc genhtml_legend=1 00:02:18.659 --rc geninfo_all_blocks=1 00:02:18.659 --no-external' 00:02:18.659 18:54:21 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:18.659 --rc lcov_branch_coverage=1 00:02:18.659 --rc lcov_function_coverage=1 00:02:18.659 --rc genhtml_branch_coverage=1 00:02:18.659 --rc genhtml_function_coverage=1 00:02:18.659 --rc genhtml_legend=1 00:02:18.659 --rc geninfo_all_blocks=1 00:02:18.659 --no-external' 00:02:18.659 18:54:21 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:18.659 lcov: LCOV version 1.14 00:02:18.659 18:54:21 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:30.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:30.879 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:39.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:39.007 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:39.007 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:39.007 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:39.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:39.007 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:39.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:39.007 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:39.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:39.007 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:39.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:39.007 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:39.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:39.007 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:39.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:39.007 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:39.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:39.007 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:39.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:39.007 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:39.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:39.007 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:39.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:39.007 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:39.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:39.007 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:39.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:39.267 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:39.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:39.267 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:39.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:39.268 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:39.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:39.268 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:39.529 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:39.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:39.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:39.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:39.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:39.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:39.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:39.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:39.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:39.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:39.530 
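The flood of "no functions found" warnings through this stretch is expected rather than a failure: the Baseline capture above runs lcov in initial mode (-c -i), which records a zero execution count for every .gcno object the build emitted, and the test/cpp_headers objects are compile-only header checks that contain no functions to record. The baseline exists so that, once merged with the post-test capture, files the tests never executed still appear at 0% coverage. A generic sketch of that two-pass lcov workflow (SRC and OUT are illustrative paths, not the exact autotest invocation):

    #!/usr/bin/env bash
    # Two-pass lcov coverage: zero-count baseline plus test capture, then merge.
    SRC=./spdk OUT=./output

    # 1. Right after the build: baseline with zero counts (-i = --initial).
    lcov --no-external -q -c -i -t Baseline -d "$SRC" -o "$OUT/cov_base.info"

    # 2. ... run the test suite here ...

    # 3. Capture the counters the tests actually produced.
    lcov --no-external -q -c -t Tests -d "$SRC" -o "$OUT/cov_test.info"

    # 4. Merge, so never-executed files still show up at 0%.
    lcov -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"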
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:39.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:39.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:39.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:39.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:39.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:39.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:39.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:39.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:39.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:39.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:39.790 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:39.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:39.790 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:39.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:39.790 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:39.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:39.790 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:39.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:39.790 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:39.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:39.790 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:39.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:39.790 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:39.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:39.790 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:39.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:39.790 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:43.083 18:54:45 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:43.083 18:54:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:43.083 18:54:45 -- common/autotest_common.sh@10 -- # set +x 00:02:43.083 18:54:45 -- spdk/autotest.sh@91 -- # rm -f 00:02:43.083 18:54:45 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:46.379 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:46.379 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:46.379 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:46.379 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:46.379 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:46.379 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:46.379 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:46.379 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:46.379 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:46.379 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:46.379 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:46.379 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:46.379 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:46.379 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:46.379 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:46.379 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:46.379 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:46.379 18:54:48 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:46.379 18:54:48 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:46.379 18:54:48 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:46.379 18:54:48 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:46.379 18:54:48 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:46.379 18:54:48 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:46.379 18:54:48 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:46.379 18:54:48 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:46.379 18:54:48 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:46.379 18:54:48 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:46.379 18:54:48 -- 
spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:46.379 18:54:48 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:46.379 18:54:48 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:46.379 18:54:48 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:46.379 18:54:48 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:46.379 No valid GPT data, bailing 00:02:46.379 18:54:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:46.379 18:54:48 -- scripts/common.sh@391 -- # pt= 00:02:46.379 18:54:48 -- scripts/common.sh@392 -- # return 1 00:02:46.380 18:54:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:46.380 1+0 records in 00:02:46.380 1+0 records out 00:02:46.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00585001 s, 179 MB/s 00:02:46.380 18:54:48 -- spdk/autotest.sh@118 -- # sync 00:02:46.380 18:54:48 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:46.380 18:54:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:46.380 18:54:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:51.662 18:54:53 -- spdk/autotest.sh@124 -- # uname -s 00:02:51.662 18:54:53 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:51.662 18:54:53 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:51.662 18:54:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:51.662 18:54:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:51.662 18:54:53 -- common/autotest_common.sh@10 -- # set +x 00:02:51.662 ************************************ 00:02:51.663 START TEST setup.sh 00:02:51.663 ************************************ 00:02:51.663 18:54:53 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:51.663 * Looking for test storage... 00:02:51.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:51.663 18:54:53 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:51.663 18:54:53 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:51.663 18:54:53 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:51.663 18:54:53 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:51.663 18:54:53 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:51.663 18:54:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:51.663 ************************************ 00:02:51.663 START TEST acl 00:02:51.663 ************************************ 00:02:51.663 18:54:54 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:51.663 * Looking for test storage... 
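The device scrub at the top of this block is worth unpacking: autotest iterates whole NVMe namespaces (the extglob pattern nvme*n!(*p*) excludes partition nodes), asks whether each holds valid partition-table data, and zeroes the first MiB of any disk that does not, so stale metadata cannot leak into the tests. A rough bash equivalent, with blkid standing in for spdk's spdk-gpt.py helper (run as root; the substitution is an assumption, not the script's actual logic):

    #!/usr/bin/env bash
    # Zero the first MiB of every unpartitioned NVMe namespace (needs root).
    shopt -s extglob nullglob

    for dev in /dev/nvme*n!(*p*); do           # whole namespaces, skip nvme0n1p1 etc.
        pt=$(blkid -s PTTYPE -o value "$dev")  # empty when no partition table found
        if [[ -n $pt ]]; then
            echo "$dev has a $pt partition table, leaving it alone"
            continue
        fi
        dd if=/dev/zero of="$dev" bs=1M count=1   # clobber stale on-disk metadata
    done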
00:02:51.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:51.663 18:54:54 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:51.663 18:54:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:51.663 18:54:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:51.663 18:54:54 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:51.663 18:54:54 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:51.663 18:54:54 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:51.663 18:54:54 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:51.663 18:54:54 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:51.663 18:54:54 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:51.663 18:54:54 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:51.663 18:54:54 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:51.663 18:54:54 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:51.663 18:54:54 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:51.663 18:54:54 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:51.663 18:54:54 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:51.663 18:54:54 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.961 18:54:57 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:54.961 18:54:57 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:54.961 18:54:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.961 18:54:57 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:54.961 18:54:57 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.962 18:54:57 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:57.507 Hugepages 00:02:57.507 node hugesize free / total 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.507 00:02:57.507 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.507 18:54:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:57.768 18:55:00 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:57.768 18:55:00 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:57.768 18:55:00 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:57.768 18:55:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:57.768 ************************************ 00:02:57.768 START TEST denied 00:02:57.768 ************************************ 00:02:57.768 18:55:00 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:57.768 18:55:00 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:57.768 18:55:00 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:57.768 18:55:00 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:57.768 18:55:00 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.768 18:55:00 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:01.064 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:01.064 18:55:03 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:01.064 18:55:03 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:01.064 18:55:03 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:01.064 18:55:03 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:01.064 18:55:03 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:01.064 18:55:03 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:01.064 18:55:03 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:01.064 18:55:03 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:01.064 18:55:03 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:01.064 18:55:03 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.264 00:03:05.264 real 0m7.055s 00:03:05.264 user 0m2.324s 00:03:05.264 sys 0m4.024s 00:03:05.264 18:55:07 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.264 18:55:07 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:05.264 ************************************ 00:03:05.264 END TEST denied 00:03:05.264 ************************************ 00:03:05.264 18:55:07 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:05.264 18:55:07 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:05.264 18:55:07 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:05.264 18:55:07 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:05.264 18:55:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:05.264 ************************************ 00:03:05.264 START TEST allowed 00:03:05.264 ************************************ 00:03:05.264 18:55:07 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:05.264 18:55:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:05.264 18:55:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:05.264 18:55:07 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:05.264 18:55:07 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.264 18:55:07 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:09.458 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:09.458 18:55:11 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:09.458 18:55:11 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:09.458 18:55:11 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:09.458 18:55:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:09.458 18:55:11 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.999 00:03:11.999 real 0m6.997s 00:03:11.999 user 0m2.217s 00:03:11.999 sys 0m3.948s 00:03:11.999 18:55:14 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:11.999 18:55:14 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:11.999 ************************************ 00:03:11.999 END TEST allowed 00:03:11.999 ************************************ 00:03:11.999 18:55:14 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:11.999 00:03:11.999 real 0m20.305s 00:03:11.999 user 0m6.981s 00:03:11.999 sys 0m12.004s 00:03:11.999 18:55:14 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:11.999 18:55:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:11.999 ************************************ 00:03:11.999 END TEST acl 00:03:11.999 ************************************ 00:03:11.999 18:55:14 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:11.999 18:55:14 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:11.999 18:55:14 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:11.999 18:55:14 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.999 18:55:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:11.999 ************************************ 00:03:11.999 START TEST hugepages 00:03:11.999 ************************************ 00:03:11.999 18:55:14 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:11.999 * Looking for test storage... 00:03:11.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:11.999 18:55:14 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174439656 kB' 'MemAvailable: 177197880 kB' 'Buffers: 10572 kB' 'Cached: 9009140 kB' 'SwapCached: 0 kB' 'Active: 6311548 kB' 'Inactive: 3433544 kB' 'Active(anon): 5939004 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 728804 kB' 'Mapped: 145284 kB' 'Shmem: 5213624 kB' 'KReclaimable: 193176 kB' 'Slab: 607292 kB' 'SReclaimable: 193176 kB' 'SUnreclaim: 414116 kB' 'KernelStack: 20304 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 9004116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311820 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
[xtrace condensed: setup/common.sh@31-32 repeats the IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue cycle once per /proc/meminfo key, from MemTotal through HugePages_Free, with no match]
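The cycle condensed above is get_meminfo scanning /proc/meminfo one 'Key: value' pair at a time and skipping every field that is not the one requested; on the trace line that follows, Hugepagesize finally matches, common.sh@33 echoes 2048 and the helper returns 0. A minimal sketch of that loop, reconstructed from the common.sh@17-33 markers in this log rather than from the SPDK source:

    get_meminfo() {                            # e.g. get_meminfo Hugepagesize
        local get=$1 node=${2:-}               # common.sh@17-18
        local mem_f=/proc/meminfo              # common.sh@22
        # common.sh@23: switch to the per-node meminfo when a node is given
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do   # common.sh@31: split "Key:  N kB"
            [[ $var == "$get" ]] || continue   # common.sh@32: skip other keys
            echo "$val"                        # common.sh@33: bare number only
            return 0
        done <"$mem_f"
        return 1
    }

In the trace the file is actually slurped with mapfile -t mem (common.sh@28) and any "Node N" prefixes are stripped at @29; the while/read form above is an equivalent condensation. Note the unit lands in the throwaway _, which is why the trace shows "echo 2048" and not "echo 2048 kB".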
setup/common.sh@31 -- # read -r var val _ 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.001 
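get_nodes and clear_hp, which start in the trace just above and finish below, run before every hugepage test: get_nodes counts the NUMA nodes (two here, hence no_nodes=2), and clear_hp zeroes each node's pools so the test starts from a clean slate. A sketch of the zeroing loop, assuming only the sysfs layout visible in the trace (xtrace does not print redirections, so the > target at @41 is an assumption):

    clear_hp() {                               # hugepages.sh@37-45, reconstructed
        local node hp                          # @37
        for node in "${!nodes_sys[@]}"; do     # @39: every node get_nodes found
            for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
                echo 0 >"$hp/nr_hugepages"     # @41: release this node's pool
            done
        done
        export CLEAR_HUGE=yes                  # @45
    }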
18:55:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:12.001 18:55:14 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:12.001 18:55:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:12.001 18:55:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.001 18:55:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:12.001 ************************************ 00:03:12.001 START TEST default_setup 00:03:12.001 ************************************ 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.001 18:55:14 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:15.301 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:15.301 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:15.301 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:15.301 
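default_setup's prologue above is plain arithmetic: get_test_nr_hugepages is called with 2097152 (kB) and the node list ('0'); with default_hugepages=2048 kB that yields the nr_hugepages=1024 recorded at hugepages.sh@57, and nodes_test[0]=1024 pins the whole allocation to node 0. A sketch of the derivation (the division itself is inferred; the trace only shows its result):

    # traced call: get_test_nr_hugepages 2097152 0   (size in kB, then node ids)
    get_test_nr_hugepages() {
        local size=$1; shift                          # hugepages.sh@49, @51
        local node_ids=("$@")                         # @52: ('0')
        (( size >= default_hugepages )) || return 1   # @55
        nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024 (@57)
        get_test_nr_hugepages_per_node "${node_ids[@]}"  # @58: nodes_test[0]=1024
    }

The device lines surrounding this note are scripts/setup.sh rebinding the ioatdma channels (and, further down, the NVMe drive at 0000:5e:00.0) to vfio-pci; the generic sysfs sequence behind each "-> vfio-pci" line looks like the following (illustrative, not setup.sh's exact code):

    bdf=0000:00:04.7                                  # one of the channels above
    echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf"   > /sys/bus/pci/drivers_probe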
0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:15.301 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:15.301 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:15.301 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:15.301 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:15.301 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:15.301 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:15.301 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:15.301 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:15.301 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:15.301 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:15.301 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:15.301 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:15.876 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.876 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176582600 kB' 'MemAvailable: 179340272 kB' 'Buffers: 10572 kB' 'Cached: 9009236 kB' 'SwapCached: 0 kB' 'Active: 6329592 kB' 'Inactive: 3433544 kB' 'Active(anon): 5957048 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 746820 kB' 'Mapped: 145432 kB' 'Shmem: 5213720 kB' 'KReclaimable: 192072 kB' 'Slab: 604848 kB' 'SReclaimable: 192072 kB' 'SUnreclaim: 412776 
kB' 'KernelStack: 20464 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9015872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 312044 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
[xtrace condensed: setup/common.sh@31-32 repeats the read/compare/continue cycle against \A\n\o\n\H\u\g\e\P\a\g\e\s for every key from MemTotal through WritebackTmp, with no match]
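verify_nr_hugepages is mid-flight here: the scan condensed above resumes below, matches AnonHugePages a few keys later (common.sh@33 echoes 0, hugepages.sh@97 records anon=0), and the same readout is then repeated for HugePages_Surp and HugePages_Rsvd. Its skeleton, as far as this excerpt shows it:

    verify_nr_hugepages() {                    # hugepages.sh@89-100, partial sketch
        local node sorted_t sorted_s surp resv anon   # @89-94
        # @96: anon only matters while transparent_hugepage is not "never"
        anon=$(get_meminfo AnonHugePages)      # @97 -> 0 in this run
        surp=$(get_meminfo HugePages_Surp)     # @99 -> 0
        resv=$(get_meminfo HugePages_Rsvd)     # @100: readout starts further down
        # the eventual comparison against the 1024 requested pages lies past
        # the end of this excerpt
    }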
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176582124 kB' 'MemAvailable: 179339796 kB' 'Buffers: 10572 kB' 'Cached: 9009252 kB' 'SwapCached: 0 kB' 'Active: 6329248 kB' 'Inactive: 3433544 kB' 'Active(anon): 5956704 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 746364 kB' 'Mapped: 145352 kB' 'Shmem: 5213736 kB' 'KReclaimable: 192072 kB' 'Slab: 604920 kB' 'SReclaimable: 192072 kB' 'SUnreclaim: 412848 kB' 'KernelStack: 20448 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9016100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 312012 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB' 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.878 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 
[xtrace condensed: setup/common.sh@31-32 repeats the read/compare/continue cycle against \H\u\g\e\P\a\g\e\s\_\S\u\r\p for every key from Buffers through CmaTotal, with no match]
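Each of these readouts rescans the whole file from the top, which is why the same MemTotal-to-HugePages ladder keeps reappearing in the trace; the scan resumes below and matches HugePages_Surp a few keys later. The entire loop is equivalent to a single awk lookup (equivalent command shown for illustration, not what the harness runs):

    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # -> 0 in this run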
setup/common.sh@31 -- # IFS=': ' 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
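
The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time under xtrace until the requested field (HugePages_Surp) matches. A minimal standalone sketch of that lookup, assuming the mapfile/extglob/IFS=': ' structure visible in the trace; the function name and loop body here are illustrative, not the verbatim SPDK source:

    #!/usr/bin/env bash
    # Sketch: print one field from /proc/meminfo, or from a per-node meminfo
    # file when a node number is given (mirrors common.sh@17-33 above).
    shopt -s extglob
    get_meminfo_sketch() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      local -a mem
      # Per-node stats live in sysfs; fall back to the global file otherwise.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      # Per-node lines carry a "Node <n> " prefix; strip it as common.sh@29 does.
      mem=("${mem[@]#Node +([0-9]) }")
      local line
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
      done
      return 1
    }
    get_meminfo_sketch HugePages_Surp   # prints 0 on the node captured in this log
    get_meminfo_sketch HugePages_Rsvd   # prints 0, matching resv=0 below
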
00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:15.880 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:15.881 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176580772 kB' 'MemAvailable: 179338444 kB' 'Buffers: 10572 kB' 'Cached: 9009268 kB' 'SwapCached: 0 kB' 'Active: 6329392 kB' 'Inactive: 3433544 kB' 'Active(anon): 5956848 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 746488 kB' 'Mapped: 145352 kB' 'Shmem: 5213752 kB' 'KReclaimable: 192072 kB' 'Slab: 604920 kB' 'SReclaimable: 192072 kB' 'SUnreclaim: 412848 kB' 'KernelStack: 20464 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9016280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311996 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
[setup/common.sh@31-32: the IFS=': ' read/continue scan repeats for each /proc/meminfo field, MemTotal through HugePages_Free; none matches HugePages_Rsvd]
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:16.146 nr_hugepages=1024
18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:16.146 resv_hugepages=0
18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:16.146 surplus_hugepages=0
18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:16.146 anon_hugepages=0
18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
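
The two arithmetic assertions at hugepages.sh@107-109 above check that the hugepage pool is quiescent before the test proceeds: the counter expanded to 1024 on the left-hand side must equal the requested pool plus any surplus and reserved pages. A replay with this log's values; "free" is a hypothetical name, since the trace only shows the literal 1024, which matches the HugePages_Free value in the dump:

    #!/usr/bin/env bash
    nr_hugepages=1024   # requested pool size, echoed by hugepages.sh@102
    surp=0              # get_meminfo HugePages_Surp
    resv=0              # get_meminfo HugePages_Rsvd
    free=1024           # HugePages_Free from the meminfo dump (assumed source of the 1024)
    (( free == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0 -> true
    (( free == nr_hugepages ))                 # also true: no surplus or reserved pages
    echo "hugepage pool is quiescent"
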
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176580736 kB' 'MemAvailable: 179338408 kB' 'Buffers: 10572 kB' 'Cached: 9009292 kB' 'SwapCached: 0 kB' 'Active: 6329792 kB' 'Inactive: 3433544 kB' 'Active(anon): 5957248 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 746924 kB' 'Mapped: 145352 kB' 'Shmem: 5213776 kB' 'KReclaimable: 192072 kB' 'Slab: 604920 kB' 'SReclaimable: 192072 kB' 'SUnreclaim: 412848 kB' 'KernelStack: 20496 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9016304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 312044 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:16.146 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: the IFS=': ' read/continue scan repeats for each /proc/meminfo field, MemTotal through Unaccepted; none matches HugePages_Total]
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
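
get_nodes (hugepages.sh@27-33 above) discovers the NUMA layout with an extglob walk of sysfs: two nodes on this rig, with the whole 1024-page pool assigned to node 0. A rough standalone equivalent; reading the per-node count from the sysfs nr_hugepages file is an assumption here, since the trace only shows the values 1024 and 0 being assigned:

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    declare -A nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
      # Key is the trailing digits ("0", "1"); value is the number of 2 MB
      # hugepages on that node (assumed sysfs source, not shown in the trace).
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1   # the suite requires at least one NUMA node
    echo "no_nodes=$no_nodes"      # -> no_nodes=2 on this rig
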
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:16.148 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85159884 kB' 'MemUsed: 12502800 kB' 'SwapCached: 0 kB' 'Active: 5180848 kB' 'Inactive: 3341228 kB' 'Active(anon): 5029492 kB' 'Inactive(anon): 0 kB' 'Active(file): 151356 kB' 'Inactive(file): 3341228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8259552 kB' 'Mapped: 79724 kB' 'AnonPages: 265696 kB' 'Shmem: 4766968 kB' 'KernelStack: 10728 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118064 kB' 'Slab: 330696 kB' 'SReclaimable: 118064 kB' 'SUnreclaim: 212632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32: the IFS=': ' read/continue scan repeats for each node0 meminfo field, MemTotal through FilePmdMapped; none matches HugePages_Surp]
00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:16.149 18:55:18 setup.sh.hugepages.default_setup --
setup/common.sh@31 -- # read -r var val _ 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:16.149 node0=1024 expecting 1024 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:16.149 00:03:16.149 real 0m3.974s 00:03:16.149 user 0m1.313s 00:03:16.149 sys 0m1.952s 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.149 18:55:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:16.149 ************************************ 00:03:16.149 END TEST default_setup 00:03:16.149 ************************************ 00:03:16.149 18:55:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:16.149 18:55:18 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:16.149 18:55:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:16.149 18:55:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.149 18:55:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:16.149 ************************************ 00:03:16.149 START TEST per_node_1G_alloc 00:03:16.149 ************************************ 00:03:16.149 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:16.149 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:16.149 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:16.149 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:16.149 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:16.149 18:55:18 
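The START/END banners and the real/user/sys block above are produced by autotest_common.sh's run_test wrapper, which times each named test function. A rough re-creation of the observable behaviour only (a sketch; the real wrapper also manages xtrace and argument checks):

    run_test() {
        local name=$1 rc
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # emits the real/user/sys block seen in the log
        rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }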
00:03:16.149 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:16.149 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:16.149 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:16.149 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:16.149 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:16.149 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:16.149 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:16.149 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:16.150 18:55:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:18.692 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:18.692 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:18.692 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:18.692 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:18.956 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:18.956 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:18.956 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:18.956 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:18.956 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:18.956 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:18.956 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:18.956 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:18.956 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:18.956 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:18.956 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:18.956 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:18.956 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
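The arithmetic in the trace above: get_test_nr_hugepages is asked for 1048576 kB (1 GiB) on each of nodes 0 and 1, and with this machine's 2048 kB default hugepage that is 1048576 / 2048 = 512 pages per node, hence NRHUGE=512 and HUGENODE=0,1, and the 1024 total pages verified below. A minimal sketch of the same computation and of the standard per-node sysfs knob setup.sh is expected to write (the sysfs path is the stock kernel interface, assumed here rather than shown in the log; writing it needs root):

    size_kb=1048576                                               # request per node (hugepages.sh@49)
    default_kb=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)   # 2048 on this box
    nrhuge=$(( size_kb / default_kb ))                            # 1048576 / 2048 = 512, i.e. NRHUGE=512
    for node in 0 1; do                                           # HUGENODE=0,1
        echo "$nrhuge" > "/sys/devices/system/node/node$node/hugepages/hugepages-${default_kb}kB/nr_hugepages"
    done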
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:18.956 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176623160 kB' 'MemAvailable: 179380860 kB' 'Buffers: 10572 kB' 'Cached: 9009380 kB' 'SwapCached: 0 kB' 'Active: 6332200 kB' 'Inactive: 3433544 kB' 'Active(anon): 5959656 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 748612 kB' 'Mapped: 144404 kB' 'Shmem: 5213864 kB' 'KReclaimable: 192128 kB' 'Slab: 604732 kB' 'SReclaimable: 192128 kB' 'SUnreclaim: 412604 kB' 'KernelStack: 20368 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9002532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311884 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
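One line above is worth decoding: hugepages.sh@96 compares the string always [madvise] never against the pattern *\[\n\e\v\e\r\]*. That string is the usual content of the kernel's transparent-hugepage switch, so the test asks whether THP is not pinned to "never" before sampling AnonHugePages. The same check in isolation (the sysfs path is the standard kernel one, assumed here):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        echo "THP not disabled; AnonHugePages can be nonzero"
    fi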
00:03:18.956 [xtrace elided: common.sh@31-32 walks the snapshot above field by field, MemTotal through HardwareCorrupted, comparing each against \A\n\o\n\H\u\g\e\P\a\g\e\s and continuing on every non-match]
00:03:18.957 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:18.958 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:18.958 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:18.958 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
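The anon=0 just computed comes from common.sh's get_meminfo helper. Reassembled from the common.sh@17-33 xtrace above, its shape is roughly the following; a sketch inferred from the log, not the verbatim SPDK source:

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # with a node id, read that node's own counters instead (the path probed at common.sh@23)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        local IFS=': '
        while read -r var val _; do
            # the field name is matched literally, hence the backslash-escaped pattern in the trace
            [[ $var == "$get" ]] && echo "${val:-0}" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        echo 0
    }

Against the snapshot above, get_meminfo HugePages_Total would print 1024, and get_meminfo HugePages_Free 0 would read node 0's copy.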
00:03:18.958 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:18.958 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.958 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:18.958 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:18.958 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:18.958 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.958 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.958 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.958 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.958 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.958 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:18.958 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176623720 kB' 'MemAvailable: 179381420 kB' 'Buffers: 10572 kB' 'Cached: 9009384 kB' 'SwapCached: 0 kB' 'Active: 6331568 kB' 'Inactive: 3433544 kB' 'Active(anon): 5959024 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 748516 kB' 'Mapped: 144304 kB' 'Shmem: 5213868 kB' 'KReclaimable: 192128 kB' 'Slab: 604880 kB' 'SReclaimable: 192128 kB' 'SUnreclaim: 412752 kB' 'KernelStack: 20368 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9002552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311836 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
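This second snapshot repeats the hugepage counters, and they are internally consistent: HugePages_Total: 1024 is the two 512-page node reservations just requested, and Hugetlb: 2097152 kB is exactly 1024 pages at the 2048 kB page size. A one-line check with the values copied from the dump above:

    pages=1024     # HugePages_Total (2 nodes x 512)
    page_kb=2048   # Hugepagesize
    echo $(( pages * page_kb ))   # 2097152, matching the Hugetlb line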
00:03:18.958 [xtrace elided: common.sh@31-32 again walks every field of the snapshot, MemTotal through HugePages_Rsvd, comparing each against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and continuing on every non-match]
00:03:18.959 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.959 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.959 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:18.959 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:18.959 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:18.959 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.959 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.959 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.959 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.959 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.959 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.959 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.959 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.959 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176622208 kB' 'MemAvailable: 179379908 kB' 'Buffers: 10572 kB' 'Cached: 9009400 kB' 'SwapCached: 0 kB' 'Active: 6331584 kB' 'Inactive: 3433544 kB' 'Active(anon): 5959040 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 748516 kB' 'Mapped: 144304 kB' 'Shmem: 5213884 kB' 'KReclaimable: 192128 kB' 'Slab: 604880 kB' 'SReclaimable: 192128 kB' 'SUnreclaim: 412752 kB' 'KernelStack: 20368 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9002576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311836 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.960 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 
18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.961 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 
18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:19.225 
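The trace above is setup/common.sh's get_meminfo helper scanning a cached copy of /proc/meminfo key by key until it reaches the requested field. A minimal standalone sketch of the same parsing idea, assuming a meminfo-format file; the function name meminfo_value is illustrative, not the SPDK source:

  #!/usr/bin/env bash
  # Sketch: print the value of one field from a meminfo-format file.
  # Mirrors the traced loop: split each line on ': ' and compare keys.
  meminfo_value() {
      local get=$1 file=${2:-/proc/meminfo}
      local var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done <"$file"
      return 1
  }
  # Example: meminfo_value HugePages_Rsvd  ->  prints 0 per the snapshot above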
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:19.225 nr_hugepages=1024
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:19.225 resv_hugepages=0
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:19.225 surplus_hugepages=0
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:19.225 anon_hugepages=0
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.225 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176622452 kB' 'MemAvailable: 179380152 kB' 'Buffers: 10572 kB' 'Cached: 9009420 kB' 'SwapCached: 0 kB' 'Active: 6332044 kB' 'Inactive: 3433544 kB' 'Active(anon): 5959500 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 748868 kB' 'Mapped: 144304 kB' 'Shmem: 5213904 kB' 'KReclaimable: 192128 kB' 'Slab: 604880 kB' 'SReclaimable: 192128 kB' 'SUnreclaim: 412752 kB' 'KernelStack: 20352 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9002596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311836 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
[per-field scan condensed: setup/common.sh@32 compares every snapshot key (MemTotal through Unaccepted) against HugePages_Total and continues past every non-match]
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
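The hugepages.sh@107 and @110 checks above assert that the kernel's HugePages_Total equals the requested nr_hugepages plus any surplus and reserved pages. A self-contained sketch of that bookkeeping, assuming a standard /proc/meminfo layout; it reproduces the check, not the SPDK script itself:

  #!/usr/bin/env bash
  # Sketch: verify kernel hugepage accounting the way the traced checks do.
  nr_hugepages=1024                                                # the allocation the test requested
  surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)   # 0 in this run
  resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)   # 0 in this run
  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo) # 1024 in this run
  if ((total == nr_hugepages + surp + resv)); then
      echo "hugepage accounting consistent: total=$total"
  else
      echo "hugepage accounting mismatch: total=$total vs $((nr_hugepages + surp + resv))" >&2
      exit 1
  fi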
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.227 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86258132 kB' 'MemUsed: 11404552 kB' 'SwapCached: 0 kB' 'Active: 5183096 kB' 'Inactive: 3341228 kB' 'Active(anon): 5031740 kB' 'Inactive(anon): 0 kB' 'Active(file): 151356 kB' 'Inactive(file): 3341228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8259664 kB' 'Mapped: 79420 kB' 'AnonPages: 267836 kB' 'Shmem: 4767080 kB' 'KernelStack: 10888 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118016 kB' 'Slab: 330316 kB' 'SReclaimable: 118016 kB' 'SUnreclaim: 212300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
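get_nodes above globs /sys/devices/system/node/node<N> to discover the two NUMA nodes, and get_meminfo then switches from /proc/meminfo to the per-node meminfo file, whose lines carry a "Node <N> " prefix that the trace strips with the mem=("${mem[@]#Node +([0-9]) }") expansion. A sketch of that per-node read, with illustrative variable names and the same glob:

  #!/usr/bin/env bash
  # Sketch: enumerate NUMA nodes and report each node's surplus hugepages.
  shopt -s extglob nullglob
  for node_dir in /sys/devices/system/node/node+([0-9]); do
      id=${node_dir##*node}            # "0", "1", ...
      # Per-node meminfo lines look like "Node 0 HugePages_Surp: 0";
      # discard the two leading tokens, mirroring the trace's prefix strip.
      while IFS=': ' read -r _ _ var val _; do
          if [[ $var == HugePages_Surp ]]; then
              echo "node$id HugePages_Surp=$val"
          fi
      done <"$node_dir/meminfo"
  done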
[per-field scan condensed: setup/common.sh@32 compares each node-0 meminfo key (MemFree through Unaccepted) against HugePages_Surp and continues past every non-match; the log continues beyond this point]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.228 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 90364824 kB' 'MemUsed: 3353644 kB' 'SwapCached: 0 kB' 'Active: 1149544 kB' 'Inactive: 92316 kB' 'Active(anon): 928356 kB' 'Inactive(anon): 0 kB' 'Active(file): 221188 kB' 'Inactive(file): 92316 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 760328 kB' 'Mapped: 64884 kB' 'AnonPages: 481660 kB' 'Shmem: 446824 kB' 'KernelStack: 9480 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74112 kB' 'Slab: 274564 kB' 'SReclaimable: 74112 kB' 'SUnreclaim: 200452 kB' 
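At this point the xtrace has spelled out every line of the get_meminfo helper in setup/common.sh: locals at common.sh@17-20, source-file selection at @22-24, mapfile plus "Node N " prefix stripping at @28-29, and the field loop at @31-33. Reconstructed from those traced lines, a minimal sketch of the helper looks like this; treat it as an illustration rather than the verbatim SPDK source, since xtrace does not print redirections and those are inferred here:

#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) pattern below (assumed; the trace implies it)

# Sketch of setup/common.sh's get_meminfo, reconstructed from the xtrace
# (common.sh@16-33). Prints the value of one /proc/meminfo field, optionally
# for a single NUMA node.
get_meminfo() {
    local get=$1    # field name to look up, e.g. HugePages_Surp
    local node=$2   # optional NUMA node number
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        # Per-node meminfo files prefix every line with "Node N ".
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix (common.sh@29)

    # IFS=': ' splits "HugePages_Surp:   0" into var=HugePages_Surp, val=0.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp 1   # prints node 1's surplus count; "0" in this run

The "echo 0" / "return 0" pairs throughout this log are exactly the last two lines of that loop firing once the requested field is reached, as in the node1 scan below.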
00:03:19.228-00:03:19.230 [xtrace field scan elided: setup/common.sh@31-32 read each node1 meminfo field (MemTotal through HugePages_Free) and continue; none match HugePages_Surp]
00:03:19.230 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.230 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.230 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:19.230 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:19.230 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.230 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.230 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.230 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:19.230 node0=512 expecting 512
00:03:19.230 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.230 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.230 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.230 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:19.230 node1=512 expecting 512
00:03:19.230 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:19.230 
00:03:19.230 real 0m3.005s
00:03:19.230 user 0m1.261s
00:03:19.230 sys 0m1.813s
00:03:19.230 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:19.230 18:55:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:19.230 ************************************
00:03:19.230 END TEST per_node_1G_alloc
00:03:19.230 ************************************
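A note on the verification idiom that just ran (hugepages.sh@126-130 above): each per-node count is written into sorted_t as an array index, so equal counts collapse into a single key and one comparison covers all nodes. A self-contained illustration of the idiom with the per-node values from this run; this is a sketch only, and the harness around it is assumed:

#!/usr/bin/env bash
# Illustration of the hugepages.sh@126-128 idiom: use array indices as a set,
# so identical per-node counts collapse to a single key (sketch, not the script).
declare -a nodes_test=([0]=512 [1]=512)   # per-node hugepage counts measured above
declare -A sorted_t=()

for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1       # the count itself becomes the key
    echo "node$node=${nodes_test[node]} expecting 512"
done

# Exactly one distinct key means every node holds the same number of pages.
(( ${#sorted_t[@]} == 1 )) && echo "per-node allocation is balanced"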
00:03:19.230 18:55:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:19.230 18:55:21 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:19.230 18:55:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:19.230 18:55:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:19.230 18:55:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:19.230 ************************************
00:03:19.230 START TEST even_2G_alloc
00:03:19.230 ************************************
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
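The get_test_nr_hugepages trace above is plain arithmetic: a 2097152 kB (2 GiB) request divided by the 2048 kB default hugepage size gives 1024 pages, spread evenly across the two NUMA nodes at 512 apiece. A sketch of that computation using the values from this run; this is an illustration only, not the verbatim hugepages.sh helpers:

#!/usr/bin/env bash
# Even-split arithmetic behind the trace above (values taken from this run).
size=2097152             # requested size in kB (2 GiB)
default_hugepages=2048   # default hugepage size in kB (2 MiB)
_no_nodes=2              # NUMA nodes present

nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
per_node=$(( nr_hugepages / _no_nodes ))       # 512

declare -a nodes_test=()
while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$per_node        # fill from the last node down, as traced
    (( _no_nodes-- ))
done

echo "NRHUGE=$nr_hugepages; per node: ${nodes_test[*]}"   # prints: 512 512

With HUGE_EVEN_ALLOC=yes and NRHUGE=1024 exported, setup.sh is then invoked to apply that layout, which is the run logged next.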
00:03:19.230 18:55:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:21.772 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:21.772 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:21.772 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:21.772 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:21.772 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:21.772 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:21.772 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:22.035 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:22.035 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:22.035 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:22.035 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:22.035 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:22.035 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:22.035 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:22.035 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:22.035 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:22.035 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:22.035 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:22.035 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:22.035 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:22.035 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:22.035 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.036 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176625480 kB' 'MemAvailable: 179383160 kB' 'Buffers: 10572 kB' 'Cached: 9009540 kB' 'SwapCached: 0 kB' 'Active: 6335216 kB' 'Inactive: 3433544 kB' 'Active(anon): 5962672 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 751836 kB' 'Mapped: 144348 kB' 'Shmem: 5214024 kB' 'KReclaimable: 192088 kB' 'Slab: 605268 kB' 'SReclaimable: 192088 kB' 'SUnreclaim: 413180 kB' 'KernelStack: 20224 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9003084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311884 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
00:03:22.036-00:03:22.037 [xtrace field scan elided: setup/common.sh@31-32 read each meminfo field (MemTotal through HardwareCorrupted) and continue; none match AnonHugePages]
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
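With anon resolved to 0, verify_nr_hugepages next folds surplus pages into the per-node expectations (hugepages.sh@99 below, then @115-117 per node, as already seen in the previous test). A condensed sketch of that accounting, reusing the get_meminfo sketch from earlier; this is paraphrased from the traced line numbers, not copied from the script, and the resv handling is simplified:

#!/usr/bin/env bash
# Condensed sketch of the verify_nr_hugepages accounting (hugepages.sh@96-117,
# paraphrased). Assumes the get_meminfo sketch shown earlier is already defined.
declare -a nodes_test=([0]=512 [1]=512)   # expected per-node counts from the test

# hugepages.sh@96: only count anonymous THP pages when THP is not "[never]".
anon=0
if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)     # "0" in this run
fi

surp=$(get_meminfo HugePages_Surp)        # machine-wide surplus pages: 0 here

# hugepages.sh@115-117: per-node surplus pages (and, in the real script,
# reserved pages via resv) widen the count the test will accept.
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
done

# anon and surp feed the later expected-total comparison in the real script.
echo "adjusted targets: node0=${nodes_test[0]} node1=${nodes_test[1]}"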
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.037 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176625812 kB' 'MemAvailable: 179383492 kB' 'Buffers: 10572 kB' 'Cached: 9009544 kB' 'SwapCached: 0 kB' 'Active: 6335116 kB' 'Inactive: 3433544 kB' 'Active(anon): 5962572 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 751748 kB' 'Mapped: 144320 kB' 'Shmem: 5214028 kB' 'KReclaimable: 192088 kB' 'Slab: 605284 kB' 'SReclaimable: 192088 kB' 'SUnreclaim: 413196 kB' 'KernelStack: 20240 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9003100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311852 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
00:03:22.037-00:03:22.038 [xtrace field scan elided: setup/common.sh@31-32 read each meminfo field (MemTotal through SUnreclaim) and continue; none match HugePages_Surp]
00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.038 18:55:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.038 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.039 18:55:24 
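The block above is the harness's get_meminfo helper resolving HugePages_Surp to 0: it slurps the meminfo file with mapfile, strips an optional "Node N " prefix so per-node files parse the same way, then splits each "key: value" line with IFS=': ' read until the requested key matches. A minimal runnable sketch of that pattern, assuming only bash 4+ (for mapfile); the names mirror the traced setup/common.sh, but this is an illustration, not the verbatim SPDK source:

    #!/usr/bin/env bash
    # Illustrative sketch of the traced get_meminfo pattern (not the verbatim
    # SPDK setup/common.sh). Fetch one key from /proc/meminfo, or from a
    # per-NUMA-node meminfo file when a node id is passed as $2.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val rest line
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node files prefix every line with "Node <n> ".
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # drop the "Node <n> " prefix, if any

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val rest <<< "$line"
            [[ $var == "$get" ]] || continue # skip keys we were not asked for
            echo "${val:-0}"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp    # system-wide; prints 0 on this box
    get_meminfo HugePages_Surp 0  # NUMA node 0 only

Matching one plain "var == get" comparison per line is why the trace shows a "[[ ... ]]" / "continue" pair for every key that precedes the requested one.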
00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.039 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176625932 kB' 'MemAvailable: 179383612 kB' 'Buffers: 10572 kB' 'Cached: 9009560 kB' 'SwapCached: 0 kB' 'Active: 6334992 kB' 'Inactive: 3433544 kB' 'Active(anon): 5962448 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 751632 kB' 'Mapped: 144320 kB' 'Shmem: 5214044 kB' 'KReclaimable: 192088 kB' 'Slab: 605284 kB' 'SReclaimable: 192088 kB' 'SUnreclaim: 413196 kB' 'KernelStack: 20240 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9003120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311852 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
[00:03:22.039-00:03:22.304 trace collapsed: the same per-key read loop as above, this time testing each /proc/meminfo key against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and logging "continue" for each non-match]
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:22.304 nr_hugepages=1024
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:22.304 resv_hugepages=0
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:22.304 surplus_hugepages=0
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:22.304 anon_hugepages=0
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
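With surp and resv both resolved to 0 and the requested pool echoed back (nr_hugepages=1024), the two arithmetic guards above encode the invariant this test leans on: the kernel-reported HugePages_Total (read next) must equal nr_hugepages plus surplus plus reserved pages. A hedged sketch of that check under the values this run reports, reusing the get_meminfo sketch shown earlier; the variable names are illustrative, not the verbatim setup/hugepages.sh:

    #!/usr/bin/env bash
    # Sketch of the accounting guard traced above, assuming the get_meminfo
    # sketch from earlier has been sourced. Values match this run's log.
    nr_hugepages=1024                    # requested pool size
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run

    total=$(get_meminfo HugePages_Total) # 1024 in this run
    # Kernel accounting must agree with what the test asked for.
    if (( total != nr_hugepages + surp + resv )); then
        echo "hugepage accounting mismatch: total=$total," \
             "expected=$((nr_hugepages + surp + resv))" >&2
        exit 1
    fi
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"

The per-node pass that follows applies the same idea node by node: with 1024 pages spread evenly, each of this machine's two NUMA nodes is expected to hold 512.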
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.304 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176627332 kB' 'MemAvailable: 179385012 kB' 'Buffers: 10572 kB' 'Cached: 9009584 kB' 'SwapCached: 0 kB' 'Active: 6335648 kB' 'Inactive: 3433544 kB' 'Active(anon): 5963104 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 752304 kB' 'Mapped: 144320 kB' 'Shmem: 5214068 kB' 'KReclaimable: 192088 kB' 'Slab: 605284 kB' 'SReclaimable: 192088 kB' 'SUnreclaim: 413196 kB' 'KernelStack: 20256 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9003144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311852 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
[00:03:22.304-00:03:22.306 trace collapsed: the same per-key read loop as above, this time testing each /proc/meminfo key against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and logging "continue" for each non-match]
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86262388 kB' 'MemUsed: 11400296 kB' 'SwapCached: 0 kB' 'Active: 5181408 kB' 'Inactive: 3341228 kB' 'Active(anon): 5030052 kB' 'Inactive(anon): 0 kB'
00:03:22.306 18:55:24 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: setup/common.sh@31-@32 walked all 36 node0 meminfo fields from MemTotal through HugePages_Free, compared each against HugePages_Surp, and hit continue on every one]
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 90365616 kB' 'MemUsed: 3352852 kB' 'SwapCached: 0 kB' 'Active: 1154404 kB' 'Inactive: 92316 kB' 'Active(anon): 933216 kB' 'Inactive(anon): 0 kB' 'Active(file): 221188 kB' 'Inactive(file): 92316 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 760444 kB' 'Mapped: 64884 kB' 'AnonPages: 486396 kB' 'Shmem: 446940 kB' 'KernelStack: 9528 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74112 kB' 'Slab: 274692 kB' 'SReclaimable: 74112 kB' 'SUnreclaim: 200580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:22.308 18:55:24 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: setup/common.sh@31-@32 walked all 36 node1 meminfo fields from MemTotal through HugePages_Free, compared each against HugePages_Surp, and hit continue on every one]
00:03:22.310 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.310 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.310 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.310 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:22.310 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:22.310 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.310 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.310 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:22.310 node0=512 expecting 512
00:03:22.310 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:22.310 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.310 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.310 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:22.310 node1=512 expecting 512
00:03:22.310 18:55:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:22.310
00:03:22.310 real 0m3.027s
00:03:22.310 user 0m1.285s
00:03:22.310 sys 0m1.809s
00:03:22.310 18:55:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:22.310 18:55:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:22.310 ************************************
00:03:22.310 END TEST even_2G_alloc
00:03:22.310 ************************************
00:03:22.310 18:55:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:22.310 18:55:24 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:22.310 18:55:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:22.310 18:55:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:22.310 18:55:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:22.310 ************************************
00:03:22.310 START TEST odd_alloc
00:03:22.310 ************************************
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:22.310 18:55:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:25.612 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:25.612 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:25.612 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:25.612 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:25.612 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:25.612 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:25.612 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:25.612 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:25.612 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:25.612 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:25.612 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:25.612 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:25.612 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:25.612 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:25.612 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:25.612 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:25.612 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176619772 kB' 'MemAvailable: 179377452 kB' 'Buffers: 10572 kB' 'Cached: 9009700 kB' 'SwapCached: 0 kB' 'Active: 6339644 kB' 'Inactive: 3433544 kB' 'Active(anon): 5967100 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 755704 kB' 'Mapped: 144448 kB' 'Shmem: 5214184 kB' 'KReclaimable: 192088 kB' 'Slab: 605924 kB' 'SReclaimable: 192088 kB' 'SUnreclaim: 413836 kB' 'KernelStack: 20288 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9003916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311884 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
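[editor's sketch] The odd_alloc prologue traced above (hugepages.sh@81-@84) splits the 1025 requested pages across 2 NUMA nodes, ending with nodes_test[0]=513 and nodes_test[1]=512. A sketch of that even-split-with-remainder logic; the loop shape is an assumption, only the resulting counts are taken from the trace:

    #!/usr/bin/env bash
    split_hugepages_per_node() {
        local _nr_hugepages=$1 _no_nodes=$2
        local -a nodes_test
        local node

        while ((_no_nodes > 0)); do
            # Give the highest remaining node its share of what is left,
            # so any remainder accumulates on the lower-numbered nodes.
            nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
            ((_nr_hugepages -= nodes_test[_no_nodes - 1]))
            ((_no_nodes -= 1))
        done

        for node in "${!nodes_test[@]}"; do
            echo "node$node=${nodes_test[node]}"
        done
    }

    split_hugepages_per_node 1025 2   # prints node0=513, node1=512 as traced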
192088 kB' 'Slab: 605924 kB' 'SReclaimable: 192088 kB' 'SUnreclaim: 413836 kB' 'KernelStack: 20288 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9003916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311884 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB' 00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.612 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.613 18:55:27 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:25.613 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 field scan repeats for Active(anon) through HardwareCorrupted: each /proc/meminfo field is read with IFS=': ' read -r var val _ and skipped via continue, none matching AnonHugePages ...]
00:03:25.614 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:25.614 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.614 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.614 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:25.614 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:25.614 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.614 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:25.614 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:25.614 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.614 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.614 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.614 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
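For readability, here is a minimal sketch of the get_meminfo() helper that produces the trace above, reconstructed from the setup/common.sh@17-33 line tags. The function name and line numbers come from the trace itself, but the body below is an inferred approximation, not the verbatim SPDK source:

    shopt -s extglob  # required for the +([0-9]) pattern used below

    get_meminfo() {
        local get=$1       # meminfo key to look up, e.g. AnonHugePages
        local node=${2:-}  # optional NUMA node; empty in this trace
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With a node argument the per-node sysfs counters are preferred;
        # with node= the traced check [[ -e .../node/node/meminfo ]] fails
        # and the global /proc/meminfo is kept.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix - strip it
        mem=("${mem[@]#Node +([0-9]) }")

        # The @31/@32 loop in the trace: split each "Key: value [unit]"
        # line on ': ' and skip it until the requested key matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")

        return 1
    }

With the snapshot being parsed here, get_meminfo AnonHugePages prints 0 (the "0 kB" value with the unit split off into _), which is exactly the echo 0 / return 0 / anon=0 sequence in the trace.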
00:03:25.614 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.614 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.614 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176619428 kB' 'MemAvailable: 179377108 kB' 'Buffers: 10572 kB' 'Cached: 9009704 kB' 'SwapCached: 0 kB' 'Active: 6338868 kB' 'Inactive: 3433544 kB' 'Active(anon): 5966324 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 755408 kB' 'Mapped: 144336 kB' 'Shmem: 5214188 kB' 'KReclaimable: 192088 kB' 'Slab: 605940 kB' 'SReclaimable: 192088 kB' 'SUnreclaim: 413852 kB' 'KernelStack: 20272 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9003936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311884 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
[... setup/common.sh@31-32 field scan repeats for MemTotal through HugePages_Rsvd: each field is read and skipped via continue, none matching HugePages_Surp ...]
00:03:25.615 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.615 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.615 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.615 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:25.615 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:25.615 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:25.615 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:25.615 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:25.615 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.615 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.615 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.615 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.615 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.615 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.616 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176619428 kB' 'MemAvailable: 179377108 kB' 'Buffers: 10572 kB' 'Cached: 9009720 kB' 'SwapCached: 0 kB' 'Active: 6338600 kB' 'Inactive: 3433544 kB' 'Active(anon): 5966056 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 755160 kB' 'Mapped: 144336 kB' 'Shmem: 5214204 kB' 'KReclaimable: 192088 kB' 'Slab: 605940 kB' 'SReclaimable: 192088 kB' 'SUnreclaim: 413852 kB' 'KernelStack: 20256 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9003956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311900 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
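As a quick sanity check on the snapshots being parsed here (values copied from the printf lines above): HugePages_Total is 1025, Hugepagesize is 2048 kB, and the reported Hugetlb footprint is 2099200 kB, which is internally consistent:

    # Illustrative arithmetic only; the numbers are taken from the trace.
    pages=1025
    pagesize_kb=2048
    hugetlb_kb=2099200
    (( pages * pagesize_kb == hugetlb_kb )) && echo "hugetlb accounting consistent"  # 1025 * 2048 = 2099200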
[... setup/common.sh@31-32 field scan repeats for MemTotal through HugePages_Free: each field is read and skipped via continue, none matching HugePages_Rsvd ...]
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:25.617 nr_hugepages=1025
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:25.617 resv_hugepages=0
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:25.617 surplus_hugepages=0
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:25.617 anon_hugepages=0
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.617 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176619932 kB' 'MemAvailable: 179377612 kB' 'Buffers: 10572 kB' 'Cached: 9009720 kB' 'SwapCached: 0 kB' 'Active: 6339144 kB' 'Inactive: 3433544 kB' 'Active(anon): 5966600 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 755696 kB' 'Mapped: 144336 kB' 'Shmem: 5214204 kB' 'KReclaimable: 192088 kB' 'Slab: 605940 kB' 'SReclaimable: 192088 kB' 'SUnreclaim: 413852 kB' 'KernelStack: 20272 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9003976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311900 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
[... setup/common.sh@31-32 field scan toward HugePages_Total in progress: MemTotal through SecPageTables read and skipped via continue ...]
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.618 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86248988 kB' 'MemUsed: 11413696 kB' 'SwapCached: 0 kB' 'Active: 5181080 kB' 'Inactive: 3341228 kB' 'Active(anon): 5029724 kB' 'Inactive(anon): 0 kB' 'Active(file): 151356 kB' 'Inactive(file): 3341228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8259760 kB' 'Mapped: 79452 kB' 'AnonPages: 265672 kB' 'Shmem: 4767176 kB' 'KernelStack: 10728 kB' 'PageTables: 4176 kB' 
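The block above is setup/common.sh:get_meminfo() answering "get_meminfo HugePages_Total": it mapfiles the chosen meminfo file, strips any "Node <n> " prefix, then scans key by key with IFS=': ' until the requested key matches and its value is echoed. A minimal self-contained sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced above (not the verbatim SPDK helper).
    shopt -s extglob                 # needed for the "Node <n> " prefix strip below
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        local -a mem
        # A node argument switches to that node's sysfs meminfo, as common.sh@23-@24 does.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node <n> "
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo_sketch HugePages_Total   # prints 1025 on this runner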
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.619 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86248988 kB' 'MemUsed: 11413696 kB' 'SwapCached: 0 kB' 'Active: 5181080 kB' 'Inactive: 3341228 kB' 'Active(anon): 5029724 kB' 'Inactive(anon): 0 kB' 'Active(file): 151356 kB' 'Inactive(file): 3341228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8259760 kB' 'Mapped: 79452 kB' 'AnonPages: 265672 kB' 'Shmem: 4767176 kB' 'KernelStack: 10728 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117976 kB' 'Slab: 330600 kB' 'SReclaimable: 117976 kB' 'SUnreclaim: 212624 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[repetitive xtrace elided: setup/common.sh@31-@32 read/compare/continue for every node0 meminfo key ahead of HugePages_Surp]
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
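node0 reports HugePages_Total: 512 while the system-wide count was 1025, so the odd page is expected on the other node. A hypothetical splitter that reproduces the 512/513 distribution implied by the nodes_sys assignments above (the real test drives this through HUGENODE; this is only an illustration):

    # Hypothetical: spread an odd hugepage count over NUMA nodes,
    # giving the remainder to the last node (matches 1025 -> 512 + 513).
    split_pages() {
        local total=$1 nodes=$2 i base=$((total / nodes)) rem=$((total % nodes))
        for ((i = 0; i < nodes; i++)); do
            echo "node$i=$((base + (i == nodes - 1 ? rem : 0)))"
        done
    }
    split_pages 1025 2   # node0=512, node1=513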
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.620 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 90369812 kB' 'MemUsed: 3348656 kB' 'SwapCached: 0 kB' 'Active: 1158152 kB' 'Inactive: 92316 kB' 'Active(anon): 936964 kB' 'Inactive(anon): 0 kB' 'Active(file): 221188 kB' 'Inactive(file): 92316 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 760596 kB' 'Mapped: 64884 kB' 'AnonPages: 490064 kB' 'Shmem: 447092 kB' 'KernelStack: 9544 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74112 kB' 'Slab: 275340 kB' 'SReclaimable: 74112 kB' 'SUnreclaim: 201228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[repetitive xtrace elided: setup/common.sh@31-@32 read/compare/continue for every node1 meminfo key ahead of HugePages_Surp]
00:03:25.622 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.622 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.622 18:55:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.622 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
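Both node queries return HugePages_Surp: 0, so the @115-@117 accounting adds nothing to the expected counts. The shape of that loop, using the helper sketched earlier (values as they appear in this run; a paraphrase of the trace, not the SPDK source):

    # Fold reserved + surplus pages into each node's expected hugepage count
    # (mirrors setup/hugepages.sh@115-@117; resv and both surpluses are 0 here).
    nodes_test=([0]=512 [1]=513)   # per-node counts for this run
    resv=0
    for node in "${!nodes_test[@]}"; do
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        (( nodes_test[node] += resv + surp ))
    done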
00:03:25.622 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:25.622 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:25.622 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:25.622 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:25.622 node0=512 expecting 513
00:03:25.622 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:25.622 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:25.622 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:25.622 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:25.622 node1=513 expecting 512
00:03:25.622 18:55:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:25.622
00:03:25.622 real 0m3.031s
00:03:25.622 user 0m1.254s
00:03:25.622 sys 0m1.836s
00:03:25.622 18:55:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:25.622 18:55:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:25.622 ************************************
00:03:25.622 END TEST odd_alloc
00:03:25.622 ************************************
00:03:25.622 18:55:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:25.622 18:55:27 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:25.622 18:55:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:25.622 18:55:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:25.622 18:55:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:25.622 ************************************
00:03:25.622 START TEST custom_alloc
00:03:25.622 ************************************
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
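custom_alloc's first get_test_nr_hugepages call asks for 1048576 kB (1 GiB); with the 2048 kB Hugepagesize reported in the meminfo dumps, setup/hugepages.sh@57 lands on nr_hugepages=512. The arithmetic, spelled out with this run's values:

    # Page-count arithmetic behind setup/hugepages.sh@49-@57 (values from this run).
    size=1048576            # requested size in kB (1 GiB)
    default_hugepages=2048  # Hugepagesize from /proc/meminfo, in kB
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))
    fi
    echo "$nr_hugepages"    # 512; the later 2097152 kB request yields 1024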
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
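With nodes_hp[0]=512 and nodes_hp[1]=1024 in place, the @181-@187 loop above joins the array into the HUGENODE spec handed to scripts/setup.sh; the comma comes from the "local IFS=," set at the top of custom_alloc. The same construction in isolation, as a sketch:

    # Build the HUGENODE string the way hugepages.sh@181-@187 does.
    IFS=,                              # "${arr[*]}" joins on the first IFS character
    nodes_hp=([0]=512 [1]=1024)
    HUGENODE=()
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    echo "HUGENODE=${HUGENODE[*]}"     # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024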
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:25.622 18:55:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:28.165 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:28.165 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:28.165 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:28.166 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:28.166 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:28.166 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:28.166 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:28.166 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:28.166 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:28.166 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:28.166 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:28.166 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:28.166 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:28.166 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:28.166 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:28.166 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:28.166 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.166 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175586252 kB' 'MemAvailable: 178343888 kB' 'Buffers: 10572 kB' 'Cached: 9009844 kB' 'SwapCached: 0 kB' 'Active: 6341224 kB' 'Inactive: 3433544 kB' 'Active(anon): 5968680 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 757696 kB' 'Mapped: 144396 kB' 'Shmem: 5214328 kB' 'KReclaimable: 192000 kB' 'Slab: 605916 kB' 'SReclaimable: 192000 kB' 'SUnreclaim: 413916 kB' 'KernelStack: 20240 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9004188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311852 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
[... repetitive xtrace omitted: setup/common.sh@32 tests each /proc/meminfo key against AnonHugePages and skips it with continue ...]
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
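The AnonHugePages lookup above is the generic get_meminfo pattern: slurp the file with mapfile, strip any 'Node N ' prefix, then split each record on ': ' and emit the value once the requested key matches. A self-contained sketch of that pattern, reconstructed from the xtrace (a local re-implementation, not the common.sh original):

  #!/usr/bin/env bash
  # Sketch of the /proc/meminfo lookup traced above (common.sh@17-@33).
  shopt -s extglob

  get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ mem_f mem
    mem_f=/proc/meminfo
    # A per-node query reads that NUMA node's own meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the 'Node N ' prefix of per-node files
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue # the long runs of 'continue' in the trace
      echo "$val" && return 0          # value only; the trailing 'kB' lands in _
    done < <(printf '%s\n' "${mem[@]}")
    return 1
  }

  get_meminfo AnonHugePages    # -> 0 on the node traced here
  get_meminfo HugePages_Total  # -> 1536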
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.433 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175588132 kB' 'MemAvailable: 178345768 kB' 'Buffers: 10572 kB' 'Cached: 9009856 kB' 'SwapCached: 0 kB' 'Active: 6342044 kB' 'Inactive: 3433544 kB' 'Active(anon): 5969500 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 758616 kB' 'Mapped: 144364 kB' 'Shmem: 5214340 kB' 'KReclaimable: 192000 kB' 'Slab: 605948 kB' 'SReclaimable: 192000 kB' 'SUnreclaim: 413948 kB' 'KernelStack: 20224 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9004580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311820 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
[... repetitive xtrace omitted: setup/common.sh@32 tests each /proc/meminfo key against HugePages_Surp and skips it with continue ...]
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
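With anon and surp both 0, the only remaining scalar is HugePages_Rsvd, fetched next. The dumps already pin down the global expectation: HugePages_Total is 1536, exactly nodes_hp[0] + nodes_hp[1], and Hugetlb is that page count times the 2048 kB page size. A sanity sketch of that arithmetic using the trace's values (not the verify_nr_hugepages implementation itself):

  # Values read back from /proc/meminfo in the trace above.
  total=1536 rsvd=0 surp=0 hugepagesize_kb=2048 hugetlb_kb=3145728
  expected=$((512 + 1024))                    # nodes_hp[0] + nodes_hp[1]
  ((total == expected)) && echo "HugePages_Total matches the per-node sum"
  ((hugetlb_kb == total * hugepagesize_kb)) && echo "Hugetlb = 1536 * 2048 kB"
  echo "$((total * hugepagesize_kb / 1024 / 1024)) GiB of hugepages"   # -> 3 GiB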
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.435 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175589120 kB' 'MemAvailable: 178346756 kB' 'Buffers: 10572 kB' 'Cached: 9009872 kB' 'SwapCached: 0 kB' 'Active: 6341648 kB' 'Inactive: 3433544 kB' 'Active(anon): 5969104 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 758216 kB' 'Mapped: 144364 kB' 'Shmem: 5214356 kB' 'KReclaimable: 192000 kB' 'Slab: 605948 kB' 'SReclaimable: 192000 kB' 'SUnreclaim: 413948 kB' 'KernelStack: 20240 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9004600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311820 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
[... repetitive xtrace omitted: setup/common.sh@32 tests each /proc/meminfo key against HugePages_Rsvd and skips it with continue; the scan continues past this point ...] 00:03:28.437
18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.437 18:55:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:28.437 nr_hugepages=1536 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.437 resv_hugepages=0 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.437 surplus_hugepages=0 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.437 anon_hugepages=0 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.437 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175590660 kB' 'MemAvailable: 178348296 kB' 'Buffers: 10572 kB' 'Cached: 9009896 kB' 'SwapCached: 0 kB' 'Active: 6341608 kB' 'Inactive: 3433544 kB' 'Active(anon): 5969064 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 758156 kB' 'Mapped: 144380 kB' 'Shmem: 5214380 kB' 'KReclaimable: 192000 kB' 'Slab: 605948 kB' 'SReclaimable: 192000 kB' 'SUnreclaim: 413948 kB' 'KernelStack: 20240 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9005744 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311788 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.438 18:55:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.438 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.439 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- 
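The @16-@33 trace above is one helper doing all the work: print the meminfo file, strip any "Node <N> " prefix, then read key/value pairs until the requested key matches and echo its value. A minimal sketch of that parser, reconstructed from the xtrace alone (names mirror the trace, but this is an approximation of setup/common.sh's get_meminfo, not the actual SPDK source):

#!/usr/bin/env bash
# Approximate re-creation of the get_meminfo helper traced above: look up one
# key in /proc/meminfo, or in /sys/devices/system/node/node<N>/meminfo when a
# node number is supplied.
shopt -s extglob  # required for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo mem
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <N> "; strip that so
    # the key is always the first field, as the mem=(...) step in the trace does.
    mem=("${mem[@]#Node +([0-9]) }")
    local IFS=': '
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Rsvd     # global lookup; prints 0 in the run above
get_meminfo HugePages_Surp 0   # node-0 lookup; prints 0 in the run above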
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.440 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86249408 kB' 'MemUsed: 11413276 kB' 'SwapCached: 0 kB' 'Active: 5180800 kB' 'Inactive: 3341228 kB' 'Active(anon): 5029444 kB' 'Inactive(anon): 0 kB' 'Active(file): 151356 kB' 'Inactive(file): 3341228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8259760 kB' 'Mapped: 79480 kB' 'AnonPages: 265536 kB' 'Shmem: 4767176 kB' 'KernelStack: 10760 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117904 kB' 'Slab: 330660 kB' 'SReclaimable: 117904 kB' 'SUnreclaim: 212756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-@32 read/compare/continue xtrace repeats for each node0 key (MemTotal through HugePages_Free, in the order printed above); none matches HugePages_Surp ...]
00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
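With both nodes discovered (nodes_sys[0]=512, nodes_sys[1]=1024), the @115-@117 loop folds each node's reserved and surplus pages into the expected per-node count. A standalone sketch of that accounting, assuming the get_meminfo sketch above and the 512/1024 split shown in this run (the nodes_test values and node numbers come from the trace; the rest is illustrative, not the hugepages.sh source):

# Hypothetical standalone version of the hugepages.sh@115-@117 accounting:
# start from the pages requested per NUMA node, then add whatever reserved
# and surplus pages the kernel reports for that node.
declare -A nodes_test=([0]=512 [1]=1024)
resv=0   # HugePages_Rsvd, read earlier in the trace

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    surp=$(get_meminfo HugePages_Surp "$node")   # get_meminfo: sketch above
    (( nodes_test[node] += ${surp:-0} ))         # both nodes report 0 here
    echo "node$node: expecting ${nodes_test[$node]} hugepages"
done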
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 89340984 kB' 'MemUsed: 4377484 kB' 'SwapCached: 0 kB' 'Active: 1160736 kB' 'Inactive: 92316 kB' 'Active(anon): 939548 kB' 'Inactive(anon): 0 kB' 'Active(file): 221188 kB' 'Inactive(file): 92316 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 760748 kB' 'Mapped: 64884 kB' 'AnonPages: 492520 kB' 'Shmem: 447244 kB' 'KernelStack: 9512 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74096 kB' 'Slab: 275288 kB' 'SReclaimable: 74096 kB' 'SUnreclaim: 201192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.442 18:55:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.442 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: get_meminfo "continue"s past every remaining /proc/meminfo field (Dirty, Writeback, ..., HugePages_Free) until it reaches HugePages_Surp ...]
00:03:28.444 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.444 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.444 18:55:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.444 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
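For readability, here is a minimal sketch of the helper being traced above: setup/common.sh's get_meminfo splits each meminfo line on ': ', skips every key that is not the requested one (each skip is one "continue" record in the xtrace), and echoes the matching value. This is a reconstruction from the trace, not the upstream source; the per-node sysfs path handling is simplified away.

# Sketch of the get_meminfo loop traced above (cf. setup/common.sh@17-33).
# Prints the value of a single /proc/meminfo field, e.g. HugePages_Surp.
# The real helper can also read /sys/devices/system/node/node<N>/meminfo,
# first stripping the "Node <N> " prefix from every line (the
# mem=("${mem[@]#Node +([0-9]) }") expansion seen in the trace); that
# per-node path is omitted here.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # each skipped key is one "continue" record above
        echo "${val:-0}"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo HugePages_Surp   # prints 0 on this machine, per the snapshot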
00:03:28.444 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:28.444 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:28.444 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:28.444 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:28.444 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:28.444 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:28.444 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:28.444 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:03:28.444 18:55:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:28.444 
00:03:28.444 real 0m3.059s
00:03:28.444 user 0m1.283s
00:03:28.444 sys 0m1.837s
00:03:28.444 18:55:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:28.444 18:55:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:28.444 ************************************
00:03:28.444 END TEST custom_alloc
00:03:28.444 ************************************
00:03:28.444 18:55:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:28.444 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:28.444 18:55:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:28.444 18:55:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:28.444 18:55:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:28.444 ************************************
00:03:28.444 START TEST no_shrink_alloc
00:03:28.444 ************************************
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
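The records that follow trace get_test_nr_hugepages turning a size in kB into a hugepage count. A sketch of the arithmetic, inferred from the traced values (xtrace only echoes the result, nr_hugepages=1024, not the division itself):

# get_test_nr_hugepages 2097152 0  ->  nr_hugepages=1024 (sketch, values from the trace)
size=2097152            # requested size in kB
default_hugepages=2048  # kB per hugepage ('Hugepagesize: 2048 kB' in the snapshots below)
(( size >= default_hugepages )) || { echo "size too small" >&2; exit 1; }
nr_hugepages=$(( size / default_hugepages ))
echo "$nr_hugepages"    # -> 1024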
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:28.444 18:55:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:31.747 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:31.747 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:31.747 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:31.747 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:31.747 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:31.747 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:31.747 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:31.747 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:31.747 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:31.747 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:31.747 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:31.747 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:31.747 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:31.747 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:31.747 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:31.747 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:31.747 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
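get_test_nr_hugepages_per_node, traced above, fills nodes_test: with an explicit node list ('0' here) every listed node is assigned the full page count. A sketch of that logic; the even-split fallback for the no-user-nodes case is an assumption, since this run takes the user_nodes branch:

# Sketch of get_test_nr_hugepages_per_node as traced above.
nodes_test=()
user_nodes=(0)     # node_ids handed in by the test
_nr_hugepages=1024
_no_nodes=2        # NUMA nodes on this system
if (( ${#user_nodes[@]} > 0 )); then
    for n in "${user_nodes[@]}"; do
        nodes_test[n]=$_nr_hugepages          # node0 -> 1024
    done
else
    for (( n = 0; n < _no_nodes; n++ )); do   # assumed fallback: split evenly
        nodes_test[n]=$(( _nr_hugepages / _no_nodes ))
    done
fi
declare -p nodes_test   # -> declare -a nodes_test=([0]="1024")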
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:31.747 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176646824 kB' 'MemAvailable: 179404460 kB' 'Buffers: 10572 kB' 'Cached: 9009996 kB' 'SwapCached: 0 kB' 'Active: 6346092 kB' 'Inactive: 3433544 kB' 'Active(anon): 5973548 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 761836 kB' 'Mapped: 144836 kB' 'Shmem: 5214480 kB' 'KReclaimable: 192000 kB' 'Slab: 606448 kB' 'SReclaimable: 192000 kB' 'SUnreclaim: 414448 kB' 'KernelStack: 20320 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9007704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 312044 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
[... xtrace elided: get_meminfo "continue"s past MemTotal through HardwareCorrupted until it reaches AnonHugePages ...]
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.748 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:31.749 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176648636 kB' 'MemAvailable: 179406272 kB' 'Buffers: 10572 kB' 'Cached: 9010000 kB' 'SwapCached: 0 kB' 'Active: 6345632 kB' 'Inactive: 3433544 kB' 'Active(anon): 5973088 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 761368 kB' 'Mapped: 144444 kB' 'Shmem: 5214484 kB' 'KReclaimable: 192000 kB' 'Slab: 606392 kB' 'SReclaimable: 192000 kB' 'SUnreclaim: 414392 kB' 'KernelStack: 20304 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9007720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311996 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
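Each get_meminfo call above re-reads and re-scans the whole file for one field, which is why the same snapshot is printed several times in a row. Where several hugepage counters are wanted at once, a single awk pass is a common alternative; a sketch, not the SPDK helper:

# Pull all four HugePages_* counters from /proc/meminfo in one pass (sketch).
eval "$(awk -F': +' '/^HugePages_(Total|Free|Rsvd|Surp):/ { print $1 "=" $2 }' /proc/meminfo)"
echo "total=$HugePages_Total free=$HugePages_Free rsvd=$HugePages_Rsvd surp=$HugePages_Surp"
# On this box: total=1024 free=1024 rsvd=0 surp=0, matching the snapshot.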
[... xtrace elided: get_meminfo "continue"s past every field until it reaches HugePages_Surp ...]
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.750 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176648064 kB' 'MemAvailable: 179405700 kB' 'Buffers: 10572 kB' 'Cached: 9010012 kB' 'SwapCached: 0 kB' 'Active: 6344816 kB' 'Inactive: 3433544 kB' 'Active(anon): 5972272 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 761500 kB' 'Mapped: 144368 kB' 'Shmem: 5214496 kB' 'KReclaimable: 192000 kB' 'Slab: 606416 kB' 'SReclaimable: 192000 kB' 'SUnreclaim: 414416 kB' 'KernelStack: 20272 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9007744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 312076 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
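verify_nr_hugepages is collecting anon, surp and resv so the per-node counts in nodes_test can be checked against what the kernel actually allocated. The per-node numbers can also be read straight from sysfs; a sketch using the standard kernel paths (not part of the traced script):

# Per-node 2 MiB hugepage counters from sysfs (standard kernel layout).
for n in /sys/devices/system/node/node[0-9]*; do
    d=$n/hugepages/hugepages-2048kB
    printf '%s: allocated=%s free=%s\n' "${n##*/}" "$(cat "$d/nr_hugepages")" "$(cat "$d/free_hugepages")"
done
# Expected here: node0 and node1 together hold the 1024 pages requested above.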
'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 
18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.751 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
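The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" entries through here is setup/common.sh scanning /proc/meminfo one key at a time under set -x, so the trace records a comparison for every non-matching key until HugePages_Rsvd matches further below. A minimal sketch of that scan (a hypothetical reduction, not the repository's exact get_meminfo; the variable names are illustrative):

    # Read each "Key: value" line of /proc/meminfo and skip until the
    # requested key; every skipped key appears in the xtrace as one
    # '[[ ... ]]' entry followed by 'continue'.
    get=HugePages_Rsvd
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"    # the trace's 'echo 0' just before 'return 0'
        break
    done < /proc/meminfo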
00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:31.752 nr_hugepages=1024 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:31.752 resv_hugepages=0 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:31.752 surplus_hugepages=0 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:31.752 anon_hugepages=0 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.752 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176645836 kB' 'MemAvailable: 179403472 kB' 'Buffers: 10572 kB' 'Cached: 9010036 kB' 'SwapCached: 0 kB' 'Active: 6345012 kB' 'Inactive: 3433544 kB' 'Active(anon): 5972468 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 761156 kB' 'Mapped: 144368 kB' 'Shmem: 5214520 kB' 'KReclaimable: 192000 kB' 'Slab: 606416 kB' 'SReclaimable: 192000 kB' 'SUnreclaim: 414416 kB' 'KernelStack: 20288 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9007768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 312028 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.753 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 
18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85221264 kB' 'MemUsed: 12441420 kB' 'SwapCached: 0 kB' 'Active: 5181796 kB' 'Inactive: 3341228 kB' 'Active(anon): 5030440 kB' 'Inactive(anon): 0 kB' 'Active(file): 151356 kB' 'Inactive(file): 3341228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8259776 kB' 'Mapped: 79484 kB' 'AnonPages: 266388 kB' 'Shmem: 4767192 kB' 'KernelStack: 10936 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117904 kB' 'Slab: 331240 kB' 'SReclaimable: 117904 kB' 'SUnreclaim: 213336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
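At hugepages.sh@117 the same helper is re-run per NUMA node: with node=0 the source file switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that common.sh strips with an extglob pattern before parsing. A sketch of that prefix handling, assuming extglob and the node0 path seen in the trace (illustrative only, not the script's exact code):

    shopt -s extglob
    # Per-node lines look like "Node 0 HugePages_Total: 1024"; drop the
    # "Node <N> " prefix so the keys match the /proc/meminfo format and
    # the same scan loop can parse either file.
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"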
00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.754 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:31.755 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [per-key scan condensed: Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free each fail the == HugePages_Surp test and hit continue]
00:03:31.756 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:31.756 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.756 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:31.756 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:31.756 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:31.756 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:31.756 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:31.756 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:31.756 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:31.756 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:31.756 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:31.756 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:31.756 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:31.756 18:55:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
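The wall of continue records above is xtrace output for one linear scan: get_meminfo walks its meminfo snapshot one "Key: value" pair at a time, skipping every key until the requested one matches, then echoes that key's value and returns. A minimal sketch of the loop traced at common.sh@31-33 (reconstructed from the trace for illustration, not SPDK's verbatim setup/common.sh; it reads /proc/meminfo directly instead of the script's mapfile snapshot):

    #!/usr/bin/env bash
    # Scan "Key: value [kB]" pairs and print the value of the first key
    # equal to $1, mirroring the loop traced at setup/common.sh@31-33.
    sketch_get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do  # @31: split on ': ' into key/value/unit
            [[ $var == "$get" ]] || continue  # @32: the mismatches traced above
            echo "$val"                       # @33: bare number; any 'kB' lands in $_
            return 0                          # @33: stop at the first match
        done < /proc/meminfo
    }
    sketch_get_meminfo HugePages_Surp         # prints 0, matching the "echo 0" above

With IFS=': ' the read splits "HugePages_Surp: 0" into var=HugePages_Surp and val=0, which is why the trace ends in a bare echo 0.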
00:03:34.298 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:34.298 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:34.298 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:34.298 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:34.298 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:34.298 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:34.298 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:34.298 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:34.298 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:34.298 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:34.298 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:34.298 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:34.298 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:34.298 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:34.298 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:34.298 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:34.298 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:34.298 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
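Before any scan, get_meminfo builds the snapshot it will walk; the common.sh@17-31 records above show the mechanics: default to /proc/meminfo, probe for a per-node file (here the probe tests node/meminfo because node= is empty), mapfile the chosen file, and strip the "Node N " prefix that per-node meminfo lines carry. A sketch under those assumptions (the exact branching around the @23/@25 tests is not visible in the trace):

    #!/usr/bin/env bash
    shopt -s extglob                              # required by the +([0-9]) pattern
    node=""                                       # empty in this trace (@18)
    mem_f=/proc/meminfo                           # @22: default, whole-system source
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"                     # @28: one array entry per line
    mem=("${mem[@]#Node +([0-9]) }")              # @29: drop any "Node 0 " prefix
    printf '%s\n' "${mem[@]}"                     # @16: emits a snapshot like the one below

The prefix strip is a no-op for /proc/meminfo; it only matters for the per-node files, whose lines read like "Node 0 HugePages_Total: 1024".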
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176654412 kB' 'MemAvailable: 179412060 kB' 'Buffers: 10572 kB' 'Cached: 9010272 kB' 'SwapCached: 0 kB' 'Active: 6348796 kB' 'Inactive: 3433544 kB' 'Active(anon): 5976252 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 764748 kB' 'Mapped: 144432 kB' 'Shmem: 5214756 kB' 'KReclaimable: 192024 kB' 'Slab: 606308 kB' 'SReclaimable: 192024 kB' 'SUnreclaim: 414284 kB' 'KernelStack: 20208 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9005956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311900 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
00:03:34.298 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [per-key scan condensed: MemTotal through HardwareCorrupted each fail the == AnonHugePages test and hit continue]
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
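Each of these page-long traces ultimately fetches a single integer; the bulk is just set -x printing every comparison. Where per-node accounting isn't needed, a hypothetical one-shot equivalent of the three lookups verify_nr_hugepages makes here would be (not what setup/common.sh does, since its snapshot approach also serves the node-specific queries):

    awk '/^AnonHugePages:/  {print $2}' /proc/meminfo   # -> 0 (anon, just stored above)
    awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo   # -> 0 (surp, fetched next)
    awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo   # -> 0 (resv, fetched after that)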
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # [meminfo snapshot condensed: identical to the snapshot above except MemFree: 176656804 kB, MemAvailable: 179414452 kB, Active: 6347660 kB, Active(anon): 5975116 kB, AnonPages: 763636 kB, Mapped: 144376 kB, Slab: 606280 kB, SUnreclaim: 414256 kB, KernelStack: 20224 kB, PageTables: 8020 kB, Committed_AS: 9005972 kB, VmallocUsed: 311852 kB]
00:03:34.300 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [per-key scan condensed: MemTotal through HugePages_Rsvd each fail the == HugePages_Surp test and hit continue]
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
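With anon=0 and surp=0 stored, the @100 call below fetches HugePages_Rsvd, filling the last of the three locals reserved at hugepages.sh@92-94. The comparison those values feed is outside this excerpt, but the "node0=1024 expecting 1024" check earlier suggests its shape; a plausible condensation under that assumption (hypothetical helper and arithmetic, not SPDK's literal hugepages.sh):

    #!/usr/bin/env bash
    # Hypothetical: surplus/reserved/THP pages could skew a naive count, so
    # they are fetched before comparing the per-node tally to the target.
    meminfo() { awk -v k="$1" '$1 == (k ":") {print $2}' /proc/meminfo; }
    anon=$(meminfo AnonHugePages)          # 0 here: THP is not inflating the tally
    surp=$(meminfo HugePages_Surp)         # 0 here: nothing beyond nr_hugepages
    resv=$(meminfo HugePages_Rsvd)         # the value fetched below
    declare -A nodes_test=([node0]=1024)   # per-node tally built earlier in the trace
    expected=1024
    echo "node0=${nodes_test[node0]} expecting $expected"  # hugepages.sh@128 format
    [[ ${nodes_test[node0]} == "$expected" ]]              # the @130 pass/fail test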
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # [meminfo snapshot condensed: identical to the snapshot above except Cached: 9010276 kB, Active: 6347700 kB, Active(anon): 5975156 kB, AnonPages: 763700 kB, Shmem: 5214760 kB, KernelStack: 20256 kB, PageTables: 8116 kB, Committed_AS: 9005996 kB]
00:03:34.302 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [per-key scan condensed: MemTotal through AnonPages each fail the == HugePages_Rsvd test and hit continue]
# IFS=': ' 00:03:34.303 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.303 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.303 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.303 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.303 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.303 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.303 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.303 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:34.567 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:34.568 nr_hugepages=1024 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.568 resv_hugepages=0 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.568 surplus_hugepages=0 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.568 anon_hugepages=0 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
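What the trace above is exercising: get_meminfo resolves a single key (here HugePages_Rsvd, yielding resv=0) by snapshotting a meminfo file and splitting each line on ': '. A minimal sketch of that logic, paraphrased from the xtrace rather than copied from test/setup/common.sh, so the helper name and exact structure are illustrative:

shopt -s extglob                      # the +([0-9]) pattern below needs extglob
get_meminfo_sketch() {                # usage: get_meminfo_sketch <key> [numa-node]
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem line
    # With a node id the per-node sysfs file is used; without one the probe
    # tests the nonexistent .../node/node/meminfo and keeps /proc/meminfo,
    # which is exactly the "[[ -e ... ]]" step visible in the trace.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node lines are prefixed "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # the long run of "continue" above
        echo "$val"
        return 0
    done
    return 1
}
get_meminfo_sketch HugePages_Rsvd     # prints 0 on this box, hence resv=0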
00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:34.568 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 176656784 kB' 'MemAvailable: 179414432 kB' 'Buffers: 10572 kB' 'Cached: 9010316 kB' 'SwapCached: 0 kB' 'Active: 6348260 kB' 'Inactive: 3433544 kB' 'Active(anon): 5975716 kB' 'Inactive(anon): 0 kB' 'Active(file): 372544 kB' 'Inactive(file): 3433544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 764260 kB' 'Mapped: 144376 kB' 'Shmem: 5214800 kB' 'KReclaimable: 192024 kB' 'Slab: 606280 kB' 'SReclaimable: 192024 kB' 'SUnreclaim: 414256 kB' 'KernelStack: 20240 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9006016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 311852 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 492500 kB' 'DirectMap2M: 7575552 kB' 'DirectMap1G: 193986560 kB'
[setup/common.sh@31-32 xtrace condensed: the same per-key walk repeats against HugePages_Total, skipping MemTotal through Unaccepted with "continue"]
00:03:34.569 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:34.569 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:34.569 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:34.569 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:34.569 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:34.569 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:34.569 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:34.569 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:34.569 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:34.569 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
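get_nodes, traced just above, enumerates the NUMA node directories and records a hugepage count per node. The assignments arrive pre-expanded in the xtrace (1024 for node0, 0 for node1), so where the value is read from is an assumption in this sketch — presumably the per-node nr_hugepages file for the 2048 kB default size reported in the snapshots:

shopt -s extglob
declare -a nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} keeps only the numeric id ("0", "1"); the
    # hugepages-2048kB path matches 'Hugepagesize: 2048 kB' above but is assumed
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}   # 2 on this machine: node0 holds all 1024 pages, node1 none
(( no_nodes > 0 ))          # the sanity check seen at hugepages.sh@33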
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:34.570 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85232400 kB' 'MemUsed: 12430284 kB' 'SwapCached: 0 kB' 'Active: 5182880 kB' 'Inactive: 3341228 kB' 'Active(anon): 5031524 kB' 'Inactive(anon): 0 kB' 'Active(file): 151356 kB' 'Inactive(file): 3341228 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8259952 kB' 'Mapped: 79492 kB' 'AnonPages: 267296 kB' 'Shmem: 4767368 kB' 'KernelStack: 10728 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117936 kB' 'Slab: 331376 kB' 'SReclaimable: 117936 kB' 'SUnreclaim: 213440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32 xtrace condensed: the walk repeats a third time over the node0 snapshot above, skipping MemTotal through HugePages_Free until HugePages_Surp matches]
00:03:34.571 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.571 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.571 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:34.571 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:34.571 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:34.571 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:34.571 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:34.571 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:34.571 node0=1024 expecting 1024
00:03:34.571 18:55:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:34.571
00:03:34.571 real 0m5.943s
00:03:34.571 user 0m2.476s
00:03:34.571 sys 0m3.597s
00:03:34.571 18:55:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- #
xtrace_disable 00:03:34.571 18:55:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:34.571 ************************************ 00:03:34.571 END TEST no_shrink_alloc 00:03:34.571 ************************************ 00:03:34.571 18:55:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:34.571 18:55:36 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:34.571 18:55:36 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:34.571 18:55:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.571 18:55:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.571 18:55:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.571 18:55:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.571 18:55:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.571 18:55:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.571 18:55:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.571 18:55:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.571 18:55:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.571 18:55:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.571 18:55:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:34.571 18:55:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:34.571 00:03:34.571 real 0m22.596s 00:03:34.571 user 0m9.109s 00:03:34.571 sys 0m13.201s 00:03:34.571 18:55:36 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.571 18:55:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:34.571 ************************************ 00:03:34.571 END TEST hugepages 00:03:34.571 ************************************ 00:03:34.571 18:55:37 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:34.571 18:55:37 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:34.571 18:55:37 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.571 18:55:37 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.571 18:55:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:34.571 ************************************ 00:03:34.571 START TEST driver 00:03:34.571 ************************************ 00:03:34.571 18:55:37 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:34.571 * Looking for test storage... 
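
The long runs of "continue" records in the no_shrink_alloc trace above come from setup/common.sh scanning a meminfo table one "Key: value" line at a time until it reaches the field it wants (HugePages_Surp here, which ends the scan with "echo 0"). A minimal sketch of that pattern, assuming a plain /proc/meminfo source (the helper name and default path are illustrative, not lifted from the script):

  get_meminfo_field() {
      local want=$1 file=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          # every key that is not the requested one logs a single 'continue'
          [[ $var == "$want" ]] || continue
          echo "$val"
          return 0
      done <"$file"
      return 1
  }

  surp=$(get_meminfo_field HugePages_Surp)   # 0 in the run above
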
00:03:34.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:34.832 18:55:37 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:34.832 18:55:37 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.832 18:55:37 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.035 18:55:41 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:39.035 18:55:41 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.035 18:55:41 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.035 18:55:41 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:39.035 ************************************ 00:03:39.035 START TEST guess_driver 00:03:39.035 ************************************ 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:39.035 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:39.035 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:39.035 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:39.035 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:39.035 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:39.035 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:39.035 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:39.035 18:55:41 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:39.035 Looking for driver=vfio-pci 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.035 18:55:41 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:41.572 18:55:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.572 18:55:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.572 18:55:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.572 18:55:44 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.572 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:41.832 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.402 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.402 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.402 18:55:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.662 18:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:42.662 18:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:42.662 18:55:45 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.662 18:55:45 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.859 00:03:46.859 real 0m7.865s 00:03:46.859 user 0m2.360s 00:03:46.859 sys 0m4.009s 00:03:46.859 18:55:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.859 18:55:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:46.859 ************************************ 00:03:46.859 END TEST guess_driver 00:03:46.859 ************************************ 00:03:46.859 18:55:49 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:46.859 00:03:46.859 real 0m12.092s 00:03:46.859 user 0m3.582s 00:03:46.859 sys 0m6.214s 00:03:46.859 18:55:49 
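
The guess_driver pass traced here reduces to a small decision: vfio-pci is preferred when the host exposes IOMMU groups (174 in this run, so the unsafe no-IOMMU toggle was irrelevant) and modprobe can resolve the module to real .ko files. A condensed sketch of that logic, not a drop-in copy of setup/driver.sh:

  pick_driver() {
      local unsafe_vfio=N
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      local iommu_groups=(/sys/kernel/iommu_groups/*)
      if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
          # is_driver check: --show-depends must resolve to actual kernel modules
          if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
              echo vfio-pci
              return 0
          fi
      fi
      echo 'No valid driver found'
      return 1
  }
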
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.859 18:55:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:46.859 ************************************ 00:03:46.859 END TEST driver 00:03:46.859 ************************************ 00:03:46.859 18:55:49 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:46.859 18:55:49 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:46.859 18:55:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.859 18:55:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.859 18:55:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.859 ************************************ 00:03:46.859 START TEST devices 00:03:46.859 ************************************ 00:03:46.859 18:55:49 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:46.859 * Looking for test storage... 00:03:46.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:46.859 18:55:49 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:46.859 18:55:49 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:46.859 18:55:49 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.859 18:55:49 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.153 18:55:52 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:50.153 18:55:52 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:50.153 18:55:52 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:50.153 18:55:52 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:50.153 18:55:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.153 18:55:52 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:50.153 18:55:52 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:50.153 18:55:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:50.153 18:55:52 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.153 18:55:52 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:50.153 18:55:52 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:50.153 18:55:52 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:50.153 18:55:52 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:50.153 18:55:52 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:50.153 18:55:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.153 18:55:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:50.153 18:55:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.153 18:55:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:50.154 18:55:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:50.154 18:55:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:50.154 18:55:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:50.154 
18:55:52 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:50.154 No valid GPT data, bailing 00:03:50.154 18:55:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:50.154 18:55:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.154 18:55:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.154 18:55:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:50.154 18:55:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:50.154 18:55:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:50.154 18:55:52 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:50.154 18:55:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:50.154 18:55:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.154 18:55:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:50.154 18:55:52 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:50.154 18:55:52 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:50.154 18:55:52 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:50.154 18:55:52 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.154 18:55:52 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.154 18:55:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:50.154 ************************************ 00:03:50.154 START TEST nvme_mount 00:03:50.154 ************************************ 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:50.154 18:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:51.094 Creating new GPT entries in memory. 00:03:51.094 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:51.094 other utilities. 00:03:51.094 18:55:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:51.094 18:55:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.094 18:55:53 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:51.094 18:55:53 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:51.094 18:55:53 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:52.030 Creating new GPT entries in memory. 00:03:52.030 The operation has completed successfully. 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 102467 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:52.289 18:55:54 
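
Everything nvme_mount has traced so far is the standard gate-partition-format-mount sequence: confirm the disk is not in use (no PTTYPE from blkid, no valid GPT per spdk-gpt.py) and large enough, wipe it, carve a 1 GiB partition (sectors 2048-2099199), then put ext4 on it. A condensed sketch with shortened paths; the sync_dev_uevents.sh wrapper is approximated here by udevadm settle, which is an assumption about its effect, not its implementation:

  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all                            # destroy old GPT/MBR state
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 2097152 sectors = 1 GiB
  udevadm settle                                      # wait for ${disk}p1 to appear
  mkdir -p /tmp/nvme_mount
  mkfs.ext4 -qF "${disk}p1"
  mount "${disk}p1" /tmp/nvme_mount
  touch /tmp/nvme_mount/test_nvme                     # the file verify checks for
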
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.289 18:55:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.820 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.080 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:55.080 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:55.080 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.080 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.080 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.080 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:55.080 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.080 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.080 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.080 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:55.080 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:55.080 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.080 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:55.338 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:55.338 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:55.338 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:55.338 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:55.338 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:55.338 18:55:57 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:55.338 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.338 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:55.338 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:55.597 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.598 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.598 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:55.598 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:55.598 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.598 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.598 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:55.598 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.598 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:55.598 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:55.598 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.598 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:55.598 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:55.598 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.598 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.137 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:58.138 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.398 18:56:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
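
The wall of [[ 0000:XX:XX.X == ... ]] records above and below is the verify loop reading one line per PCI function from "setup.sh config" output and only inspecting the allowed NVMe controller. A sketch of that loop, assuming setup.sh is invoked from the repo root (the relative path is illustrative):

  dev=0000:5e:00.0
  mounts=nvme0n1:nvme0n1
  found=0
  while read -r pci _ _ status; do
      [[ $pci == "$dev" ]] || continue    # every other BDF just advances the loop
      [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
  done < <(PCI_ALLOWED=$dev ./scripts/setup.sh config)
  ((found == 1)) && echo 'device mounted as expected'
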
00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.938 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.198 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.198 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:01.198 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:01.198 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:01.198 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.198 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.198 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:01.198 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:01.198 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.198 00:04:01.198 real 0m11.105s 00:04:01.198 user 0m3.272s 00:04:01.198 sys 0m5.635s 00:04:01.198 18:56:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.198 18:56:03 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:01.198 ************************************ 00:04:01.198 END TEST nvme_mount 00:04:01.198 ************************************ 00:04:01.198 18:56:03 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:01.198 18:56:03 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:01.198 18:56:03 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.198 18:56:03 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.198 18:56:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:01.198 ************************************ 00:04:01.198 START TEST dm_mount 00:04:01.198 ************************************ 00:04:01.198 18:56:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:01.198 18:56:03 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:01.198 18:56:03 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:01.198 18:56:03 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:01.198 18:56:03 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:01.198 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:01.198 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:01.198 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:01.198 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:01.198 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:01.198 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:01.198 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:01.198 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.198 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.198 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.199 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.199 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:01.199 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:01.199 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:01.199 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:01.199 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:01.199 18:56:03 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:02.578 Creating new GPT entries in memory. 00:04:02.578 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:02.578 other utilities. 00:04:02.578 18:56:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:02.578 18:56:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.578 18:56:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:02.578 18:56:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:02.578 18:56:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:03.518 Creating new GPT entries in memory. 00:04:03.518 The operation has completed successfully. 00:04:03.518 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:03.518 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.518 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:03.518 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:03.518 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:04.455 The operation has completed successfully. 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 106654 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.455 18:56:06 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.456 18:56:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.994 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.994 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:06.994 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:06.994 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.994 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.994 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:06.995 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:07.254 18:56:09 
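
The holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 string being verified encodes the device-mapper relationship built earlier: dmsetup created nvme_dm_test over both partitions, /dev/mapper/nvme_dm_test resolved to /dev/dm-2, and each backing partition must list dm-2 under its holders directory. A minimal sketch of that check (the dmsetup table itself is not shown in the trace and is omitted here too):

  dm=$(readlink -f /dev/mapper/nvme_dm_test)   # /dev/dm-2 in this run
  dm=${dm##*/}                                 # -> dm-2
  for part in nvme0n1p1 nvme0n1p2; do
      [[ -e /sys/class/block/$part/holders/$dm ]] ||
          { echo "$part is not held by $dm" >&2; exit 1; }
  done
  echo "holder@nvme0n1p1:$dm,holder@nvme0n1p2:$dm verified"
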
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.254 18:56:09 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:10.544 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:10.544 00:04:10.544 real 0m8.909s 00:04:10.544 user 0m2.139s 00:04:10.544 sys 0m3.772s 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.544 18:56:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:10.544 ************************************ 00:04:10.544 END TEST dm_mount 00:04:10.544 ************************************ 00:04:10.544 18:56:12 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:10.544 18:56:12 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:10.544 18:56:12 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:10.544 18:56:12 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.544 18:56:12 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.544 18:56:12 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:10.544 18:56:12 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.544 18:56:12 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:10.544 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:10.544 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:10.544 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:10.544 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:10.544 18:56:12 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:10.544 18:56:12 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:10.544 18:56:12 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:10.544 18:56:12 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.544 18:56:12 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:10.544 18:56:12 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.544 18:56:12 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:10.544 00:04:10.544 real 0m23.716s 00:04:10.544 user 0m6.732s 00:04:10.544 sys 0m11.667s 00:04:10.544 18:56:12 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.544 18:56:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:10.544 ************************************ 00:04:10.544 END TEST devices 00:04:10.544 ************************************ 00:04:10.544 18:56:12 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:10.544 00:04:10.544 real 1m19.092s 00:04:10.544 user 0m26.549s 00:04:10.544 sys 0m43.353s 00:04:10.544 18:56:12 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.544 18:56:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.544 ************************************ 00:04:10.544 END TEST setup.sh 00:04:10.544 ************************************ 00:04:10.544 18:56:12 -- common/autotest_common.sh@1142 -- # return 0 00:04:10.544 18:56:12 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:13.132 Hugepages 00:04:13.132 node hugesize free / total 00:04:13.132 node0 1048576kB 0 / 0 00:04:13.132 node0 2048kB 2048 / 2048 00:04:13.392 node1 1048576kB 0 / 0 00:04:13.392 node1 2048kB 0 / 0 00:04:13.392 00:04:13.392 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:13.392 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:13.392 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:13.392 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:13.392 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:13.392 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:13.392 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:13.392 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:13.392 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:13.392 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:13.392 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:13.392 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:13.392 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:13.392 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:13.392 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:13.392 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:13.392 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:13.392 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:13.392 18:56:15 -- spdk/autotest.sh@130 -- # uname -s 00:04:13.392 18:56:15 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:13.392 18:56:15 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:13.392 18:56:15 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.689 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:16.689 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:16.689 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:16.689 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:16.689 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:16.689 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:16.689 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:16.689 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:16.689 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:16.689 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:16.689 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:16.689 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:16.689 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:16.689 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:16.689 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:16.689 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:17.261 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:17.261 18:56:19 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:18.200 18:56:20 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:18.200 18:56:20 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:18.200 18:56:20 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:18.200 18:56:20 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:18.200 18:56:20 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:18.200 18:56:20 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:18.200 18:56:20 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:18.200 18:56:20 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:18.200 18:56:20 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:18.200 18:56:20 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:18.200 18:56:20 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:18.200 18:56:20 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.495 Waiting for block devices as requested 00:04:21.495 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:21.495 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:21.495 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:21.495 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:21.495 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:21.495 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:21.495 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:21.495 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:21.754 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:21.754 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:21.755 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:22.014 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:22.014 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:22.014 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:22.014 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:22.275 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:22.275 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:22.275 18:56:24 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:22.275 18:56:24 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:22.275 18:56:24 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:22.275 18:56:24 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:22.275 18:56:24 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:22.275 18:56:24 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:22.275 18:56:24 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:22.275 18:56:24 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:22.275 18:56:24 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:22.275 18:56:24 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:22.275 18:56:24 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:22.275 18:56:24 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:22.275 18:56:24 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:22.275 18:56:24 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:22.275 18:56:24 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:22.275 18:56:24 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:22.275 18:56:24 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:22.275 18:56:24 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:22.275 18:56:24 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:22.275 18:56:24 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:22.275 18:56:24 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:22.275 18:56:24 -- common/autotest_common.sh@1557 -- # continue 00:04:22.275 18:56:24 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:22.275 18:56:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:22.275 18:56:24 -- common/autotest_common.sh@10 -- # set +x 00:04:22.535 18:56:24 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:22.535 18:56:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.535 18:56:24 -- common/autotest_common.sh@10 -- # set +x 00:04:22.535 18:56:24 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:25.075 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:25.075 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:25.075 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:25.075 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:25.075 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:25.075 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:25.335 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:25.335 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:25.335 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:25.335 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
00:04:25.335 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:25.335 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:25.335 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:25.335 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:25.335 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:25.335 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:26.277 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:26.277 18:56:28 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:26.277 18:56:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:26.277 18:56:28 -- common/autotest_common.sh@10 -- # set +x 00:04:26.277 18:56:28 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:26.277 18:56:28 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:26.277 18:56:28 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:26.277 18:56:28 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:26.277 18:56:28 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:26.277 18:56:28 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:26.277 18:56:28 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:26.277 18:56:28 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:26.277 18:56:28 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:26.277 18:56:28 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:26.277 18:56:28 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:26.277 18:56:28 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:26.277 18:56:28 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:26.277 18:56:28 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:26.277 18:56:28 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:26.277 18:56:28 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:26.277 18:56:28 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:26.277 18:56:28 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:26.277 18:56:28 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:04:26.277 18:56:28 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:04:26.277 18:56:28 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=115452 00:04:26.277 18:56:28 -- common/autotest_common.sh@1598 -- # waitforlisten 115452 00:04:26.277 18:56:28 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.277 18:56:28 -- common/autotest_common.sh@829 -- # '[' -z 115452 ']' 00:04:26.277 18:56:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.277 18:56:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:26.277 18:56:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.277 18:56:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:26.277 18:56:28 -- common/autotest_common.sh@10 -- # set +x 00:04:26.537 [2024-07-12 18:56:28.862344] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:04:26.537 [2024-07-12 18:56:28.862395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115452 ] 00:04:26.537 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.537 [2024-07-12 18:56:28.931576] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.537 [2024-07-12 18:56:29.005854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.107 18:56:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:27.107 18:56:29 -- common/autotest_common.sh@862 -- # return 0 00:04:27.107 18:56:29 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:27.107 18:56:29 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:27.107 18:56:29 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:30.404 nvme0n1 00:04:30.404 18:56:32 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:30.404 [2024-07-12 18:56:32.807304] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:30.404 request: 00:04:30.404 { 00:04:30.404 "nvme_ctrlr_name": "nvme0", 00:04:30.404 "password": "test", 00:04:30.404 "method": "bdev_nvme_opal_revert", 00:04:30.404 "req_id": 1 00:04:30.404 } 00:04:30.404 Got JSON-RPC error response 00:04:30.404 response: 00:04:30.404 { 00:04:30.404 "code": -32602, 00:04:30.404 "message": "Invalid parameters" 00:04:30.404 } 00:04:30.404 18:56:32 -- common/autotest_common.sh@1604 -- # true 00:04:30.404 18:56:32 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:30.404 18:56:32 -- common/autotest_common.sh@1608 -- # killprocess 115452 00:04:30.404 18:56:32 -- common/autotest_common.sh@948 -- # '[' -z 115452 ']' 00:04:30.404 18:56:32 -- common/autotest_common.sh@952 -- # kill -0 115452 00:04:30.404 18:56:32 -- common/autotest_common.sh@953 -- # uname 00:04:30.404 18:56:32 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:30.404 18:56:32 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115452 00:04:30.404 18:56:32 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:30.404 18:56:32 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:30.404 18:56:32 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115452' 00:04:30.404 killing process with pid 115452 00:04:30.404 18:56:32 -- common/autotest_common.sh@967 -- # kill 115452 00:04:30.404 18:56:32 -- common/autotest_common.sh@972 -- # wait 115452 00:04:32.306 18:56:34 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:32.306 18:56:34 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:32.306 18:56:34 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:32.306 18:56:34 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:32.306 18:56:34 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:32.306 18:56:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:32.306 18:56:34 -- common/autotest_common.sh@10 -- # set +x 00:04:32.306 18:56:34 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:32.306 18:56:34 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:32.306 18:56:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:04:32.306 18:56:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.306 18:56:34 -- common/autotest_common.sh@10 -- # set +x 00:04:32.306 ************************************ 00:04:32.306 START TEST env 00:04:32.306 ************************************ 00:04:32.306 18:56:34 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:32.306 * Looking for test storage... 00:04:32.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:32.306 18:56:34 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:32.306 18:56:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.306 18:56:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.306 18:56:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.306 ************************************ 00:04:32.306 START TEST env_memory 00:04:32.306 ************************************ 00:04:32.306 18:56:34 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:32.306 00:04:32.306 00:04:32.306 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.306 http://cunit.sourceforge.net/ 00:04:32.306 00:04:32.306 00:04:32.306 Suite: memory 00:04:32.306 Test: alloc and free memory map ...[2024-07-12 18:56:34.679689] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:32.306 passed 00:04:32.306 Test: mem map translation ...[2024-07-12 18:56:34.698557] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:32.306 [2024-07-12 18:56:34.698570] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:32.306 [2024-07-12 18:56:34.698606] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:32.306 [2024-07-12 18:56:34.698612] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:32.306 passed 00:04:32.306 Test: mem map registration ...[2024-07-12 18:56:34.735133] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:32.306 [2024-07-12 18:56:34.735150] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:32.306 passed 00:04:32.307 Test: mem map adjacent registrations ...passed 00:04:32.307 00:04:32.307 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.307 suites 1 1 n/a 0 0 00:04:32.307 tests 4 4 4 0 0 00:04:32.307 asserts 152 152 152 0 n/a 00:04:32.307 00:04:32.307 Elapsed time = 0.125 seconds 00:04:32.307 00:04:32.307 real 0m0.134s 00:04:32.307 user 0m0.128s 00:04:32.307 sys 0m0.004s 00:04:32.307 18:56:34 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.307 18:56:34 env.env_memory -- common/autotest_common.sh@10 -- # set 
+x 00:04:32.307 ************************************ 00:04:32.307 END TEST env_memory 00:04:32.307 ************************************ 00:04:32.307 18:56:34 env -- common/autotest_common.sh@1142 -- # return 0 00:04:32.307 18:56:34 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:32.307 18:56:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.307 18:56:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.307 18:56:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.307 ************************************ 00:04:32.307 START TEST env_vtophys 00:04:32.307 ************************************ 00:04:32.307 18:56:34 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:32.307 EAL: lib.eal log level changed from notice to debug 00:04:32.307 EAL: Detected lcore 0 as core 0 on socket 0 00:04:32.307 EAL: Detected lcore 1 as core 1 on socket 0 00:04:32.307 EAL: Detected lcore 2 as core 2 on socket 0 00:04:32.307 EAL: Detected lcore 3 as core 3 on socket 0 00:04:32.307 EAL: Detected lcore 4 as core 4 on socket 0 00:04:32.307 EAL: Detected lcore 5 as core 5 on socket 0 00:04:32.307 EAL: Detected lcore 6 as core 6 on socket 0 00:04:32.307 EAL: Detected lcore 7 as core 8 on socket 0 00:04:32.307 EAL: Detected lcore 8 as core 9 on socket 0 00:04:32.307 EAL: Detected lcore 9 as core 10 on socket 0 00:04:32.307 EAL: Detected lcore 10 as core 11 on socket 0 00:04:32.307 EAL: Detected lcore 11 as core 12 on socket 0 00:04:32.307 EAL: Detected lcore 12 as core 13 on socket 0 00:04:32.307 EAL: Detected lcore 13 as core 16 on socket 0 00:04:32.307 EAL: Detected lcore 14 as core 17 on socket 0 00:04:32.307 EAL: Detected lcore 15 as core 18 on socket 0 00:04:32.307 EAL: Detected lcore 16 as core 19 on socket 0 00:04:32.307 EAL: Detected lcore 17 as core 20 on socket 0 00:04:32.307 EAL: Detected lcore 18 as core 21 on socket 0 00:04:32.307 EAL: Detected lcore 19 as core 25 on socket 0 00:04:32.307 EAL: Detected lcore 20 as core 26 on socket 0 00:04:32.307 EAL: Detected lcore 21 as core 27 on socket 0 00:04:32.307 EAL: Detected lcore 22 as core 28 on socket 0 00:04:32.307 EAL: Detected lcore 23 as core 29 on socket 0 00:04:32.307 EAL: Detected lcore 24 as core 0 on socket 1 00:04:32.307 EAL: Detected lcore 25 as core 1 on socket 1 00:04:32.307 EAL: Detected lcore 26 as core 2 on socket 1 00:04:32.307 EAL: Detected lcore 27 as core 3 on socket 1 00:04:32.307 EAL: Detected lcore 28 as core 4 on socket 1 00:04:32.307 EAL: Detected lcore 29 as core 5 on socket 1 00:04:32.307 EAL: Detected lcore 30 as core 6 on socket 1 00:04:32.307 EAL: Detected lcore 31 as core 9 on socket 1 00:04:32.307 EAL: Detected lcore 32 as core 10 on socket 1 00:04:32.307 EAL: Detected lcore 33 as core 11 on socket 1 00:04:32.307 EAL: Detected lcore 34 as core 12 on socket 1 00:04:32.307 EAL: Detected lcore 35 as core 13 on socket 1 00:04:32.307 EAL: Detected lcore 36 as core 16 on socket 1 00:04:32.307 EAL: Detected lcore 37 as core 17 on socket 1 00:04:32.307 EAL: Detected lcore 38 as core 18 on socket 1 00:04:32.307 EAL: Detected lcore 39 as core 19 on socket 1 00:04:32.307 EAL: Detected lcore 40 as core 20 on socket 1 00:04:32.307 EAL: Detected lcore 41 as core 21 on socket 1 00:04:32.307 EAL: Detected lcore 42 as core 24 on socket 1 00:04:32.307 EAL: Detected lcore 43 as core 25 on socket 1 00:04:32.307 EAL: Detected lcore 44 as core 26 
on socket 1 00:04:32.307 EAL: Detected lcore 45 as core 27 on socket 1 00:04:32.307 EAL: Detected lcore 46 as core 28 on socket 1 00:04:32.307 EAL: Detected lcore 47 as core 29 on socket 1 00:04:32.307 EAL: Detected lcore 48 as core 0 on socket 0 00:04:32.307 EAL: Detected lcore 49 as core 1 on socket 0 00:04:32.307 EAL: Detected lcore 50 as core 2 on socket 0 00:04:32.307 EAL: Detected lcore 51 as core 3 on socket 0 00:04:32.307 EAL: Detected lcore 52 as core 4 on socket 0 00:04:32.307 EAL: Detected lcore 53 as core 5 on socket 0 00:04:32.307 EAL: Detected lcore 54 as core 6 on socket 0 00:04:32.307 EAL: Detected lcore 55 as core 8 on socket 0 00:04:32.307 EAL: Detected lcore 56 as core 9 on socket 0 00:04:32.307 EAL: Detected lcore 57 as core 10 on socket 0 00:04:32.307 EAL: Detected lcore 58 as core 11 on socket 0 00:04:32.307 EAL: Detected lcore 59 as core 12 on socket 0 00:04:32.307 EAL: Detected lcore 60 as core 13 on socket 0 00:04:32.307 EAL: Detected lcore 61 as core 16 on socket 0 00:04:32.307 EAL: Detected lcore 62 as core 17 on socket 0 00:04:32.307 EAL: Detected lcore 63 as core 18 on socket 0 00:04:32.307 EAL: Detected lcore 64 as core 19 on socket 0 00:04:32.307 EAL: Detected lcore 65 as core 20 on socket 0 00:04:32.307 EAL: Detected lcore 66 as core 21 on socket 0 00:04:32.307 EAL: Detected lcore 67 as core 25 on socket 0 00:04:32.307 EAL: Detected lcore 68 as core 26 on socket 0 00:04:32.307 EAL: Detected lcore 69 as core 27 on socket 0 00:04:32.307 EAL: Detected lcore 70 as core 28 on socket 0 00:04:32.307 EAL: Detected lcore 71 as core 29 on socket 0 00:04:32.307 EAL: Detected lcore 72 as core 0 on socket 1 00:04:32.307 EAL: Detected lcore 73 as core 1 on socket 1 00:04:32.307 EAL: Detected lcore 74 as core 2 on socket 1 00:04:32.307 EAL: Detected lcore 75 as core 3 on socket 1 00:04:32.307 EAL: Detected lcore 76 as core 4 on socket 1 00:04:32.307 EAL: Detected lcore 77 as core 5 on socket 1 00:04:32.307 EAL: Detected lcore 78 as core 6 on socket 1 00:04:32.307 EAL: Detected lcore 79 as core 9 on socket 1 00:04:32.307 EAL: Detected lcore 80 as core 10 on socket 1 00:04:32.307 EAL: Detected lcore 81 as core 11 on socket 1 00:04:32.307 EAL: Detected lcore 82 as core 12 on socket 1 00:04:32.307 EAL: Detected lcore 83 as core 13 on socket 1 00:04:32.307 EAL: Detected lcore 84 as core 16 on socket 1 00:04:32.307 EAL: Detected lcore 85 as core 17 on socket 1 00:04:32.307 EAL: Detected lcore 86 as core 18 on socket 1 00:04:32.307 EAL: Detected lcore 87 as core 19 on socket 1 00:04:32.307 EAL: Detected lcore 88 as core 20 on socket 1 00:04:32.307 EAL: Detected lcore 89 as core 21 on socket 1 00:04:32.307 EAL: Detected lcore 90 as core 24 on socket 1 00:04:32.307 EAL: Detected lcore 91 as core 25 on socket 1 00:04:32.307 EAL: Detected lcore 92 as core 26 on socket 1 00:04:32.307 EAL: Detected lcore 93 as core 27 on socket 1 00:04:32.307 EAL: Detected lcore 94 as core 28 on socket 1 00:04:32.307 EAL: Detected lcore 95 as core 29 on socket 1 00:04:32.567 EAL: Maximum logical cores by configuration: 128 00:04:32.567 EAL: Detected CPU lcores: 96 00:04:32.567 EAL: Detected NUMA nodes: 2 00:04:32.567 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:32.567 EAL: Detected shared linkage of DPDK 00:04:32.567 EAL: No shared files mode enabled, IPC will be disabled 00:04:32.567 EAL: Bus pci wants IOVA as 'DC' 00:04:32.567 EAL: Buses did not request a specific IOVA mode. 00:04:32.567 EAL: IOMMU is available, selecting IOVA as VA mode. 
00:04:32.567 EAL: Selected IOVA mode 'VA' 00:04:32.567 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.567 EAL: Probing VFIO support... 00:04:32.567 EAL: IOMMU type 1 (Type 1) is supported 00:04:32.567 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:32.567 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:32.567 EAL: VFIO support initialized 00:04:32.567 EAL: Ask a virtual area of 0x2e000 bytes 00:04:32.567 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:32.567 EAL: Setting up physically contiguous memory... 00:04:32.567 EAL: Setting maximum number of open files to 524288 00:04:32.567 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:32.567 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:32.567 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:32.567 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.567 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:32.567 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.567 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.567 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:32.567 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:32.567 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.567 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:32.567 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.567 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.567 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:32.567 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:32.567 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.567 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:32.567 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.567 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.567 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:32.567 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:32.567 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.567 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:32.567 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.567 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.567 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:32.567 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:32.567 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:32.567 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.567 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:32.567 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.567 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.567 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:32.567 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:32.567 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.567 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:32.567 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.567 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.567 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:32.567 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:32.567 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.567 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:32.567 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:32.567 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.567 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:32.567 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:32.567 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.567 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:32.567 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:32.567 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.567 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:32.567 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:32.567 EAL: Hugepages will be freed exactly as allocated. 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: TSC frequency is ~2300000 KHz 00:04:32.567 EAL: Main lcore 0 is ready (tid=7fd01085ea00;cpuset=[0]) 00:04:32.567 EAL: Trying to obtain current memory policy. 00:04:32.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.567 EAL: Restoring previous memory policy: 0 00:04:32.567 EAL: request: mp_malloc_sync 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: Heap on socket 0 was expanded by 2MB 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:32.567 EAL: Mem event callback 'spdk:(nil)' registered 00:04:32.567 00:04:32.567 00:04:32.567 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.567 http://cunit.sourceforge.net/ 00:04:32.567 00:04:32.567 00:04:32.567 Suite: components_suite 00:04:32.567 Test: vtophys_malloc_test ...passed 00:04:32.567 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:32.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.567 EAL: Restoring previous memory policy: 4 00:04:32.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.567 EAL: request: mp_malloc_sync 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: Heap on socket 0 was expanded by 4MB 00:04:32.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.567 EAL: request: mp_malloc_sync 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: Heap on socket 0 was shrunk by 4MB 00:04:32.567 EAL: Trying to obtain current memory policy. 00:04:32.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.567 EAL: Restoring previous memory policy: 4 00:04:32.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.567 EAL: request: mp_malloc_sync 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: Heap on socket 0 was expanded by 6MB 00:04:32.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.567 EAL: request: mp_malloc_sync 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: Heap on socket 0 was shrunk by 6MB 00:04:32.567 EAL: Trying to obtain current memory policy. 
00:04:32.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.567 EAL: Restoring previous memory policy: 4 00:04:32.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.567 EAL: request: mp_malloc_sync 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: Heap on socket 0 was expanded by 10MB 00:04:32.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.567 EAL: request: mp_malloc_sync 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: Heap on socket 0 was shrunk by 10MB 00:04:32.567 EAL: Trying to obtain current memory policy. 00:04:32.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.567 EAL: Restoring previous memory policy: 4 00:04:32.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.567 EAL: request: mp_malloc_sync 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: Heap on socket 0 was expanded by 18MB 00:04:32.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.567 EAL: request: mp_malloc_sync 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: Heap on socket 0 was shrunk by 18MB 00:04:32.567 EAL: Trying to obtain current memory policy. 00:04:32.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.567 EAL: Restoring previous memory policy: 4 00:04:32.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.567 EAL: request: mp_malloc_sync 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: Heap on socket 0 was expanded by 34MB 00:04:32.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.567 EAL: request: mp_malloc_sync 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: Heap on socket 0 was shrunk by 34MB 00:04:32.567 EAL: Trying to obtain current memory policy. 00:04:32.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.567 EAL: Restoring previous memory policy: 4 00:04:32.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.567 EAL: request: mp_malloc_sync 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: Heap on socket 0 was expanded by 66MB 00:04:32.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.567 EAL: request: mp_malloc_sync 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: Heap on socket 0 was shrunk by 66MB 00:04:32.567 EAL: Trying to obtain current memory policy. 00:04:32.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.567 EAL: Restoring previous memory policy: 4 00:04:32.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.567 EAL: request: mp_malloc_sync 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: Heap on socket 0 was expanded by 130MB 00:04:32.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.567 EAL: request: mp_malloc_sync 00:04:32.567 EAL: No shared files mode enabled, IPC is disabled 00:04:32.567 EAL: Heap on socket 0 was shrunk by 130MB 00:04:32.567 EAL: Trying to obtain current memory policy. 
00:04:32.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.568 EAL: Restoring previous memory policy: 4 00:04:32.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.568 EAL: request: mp_malloc_sync 00:04:32.568 EAL: No shared files mode enabled, IPC is disabled 00:04:32.568 EAL: Heap on socket 0 was expanded by 258MB 00:04:32.827 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.827 EAL: request: mp_malloc_sync 00:04:32.827 EAL: No shared files mode enabled, IPC is disabled 00:04:32.827 EAL: Heap on socket 0 was shrunk by 258MB 00:04:32.827 EAL: Trying to obtain current memory policy. 00:04:32.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.827 EAL: Restoring previous memory policy: 4 00:04:32.827 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.827 EAL: request: mp_malloc_sync 00:04:32.827 EAL: No shared files mode enabled, IPC is disabled 00:04:32.827 EAL: Heap on socket 0 was expanded by 514MB 00:04:32.827 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.087 EAL: request: mp_malloc_sync 00:04:33.087 EAL: No shared files mode enabled, IPC is disabled 00:04:33.087 EAL: Heap on socket 0 was shrunk by 514MB 00:04:33.087 EAL: Trying to obtain current memory policy. 00:04:33.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.087 EAL: Restoring previous memory policy: 4 00:04:33.087 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.087 EAL: request: mp_malloc_sync 00:04:33.087 EAL: No shared files mode enabled, IPC is disabled 00:04:33.087 EAL: Heap on socket 0 was expanded by 1026MB 00:04:33.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.607 EAL: request: mp_malloc_sync 00:04:33.607 EAL: No shared files mode enabled, IPC is disabled 00:04:33.607 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:33.607 passed 00:04:33.607 00:04:33.607 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.607 suites 1 1 n/a 0 0 00:04:33.607 tests 2 2 2 0 0 00:04:33.607 asserts 497 497 497 0 n/a 00:04:33.607 00:04:33.607 Elapsed time = 0.979 seconds 00:04:33.607 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.607 EAL: request: mp_malloc_sync 00:04:33.607 EAL: No shared files mode enabled, IPC is disabled 00:04:33.607 EAL: Heap on socket 0 was shrunk by 2MB 00:04:33.607 EAL: No shared files mode enabled, IPC is disabled 00:04:33.607 EAL: No shared files mode enabled, IPC is disabled 00:04:33.607 EAL: No shared files mode enabled, IPC is disabled 00:04:33.607 00:04:33.607 real 0m1.104s 00:04:33.607 user 0m0.643s 00:04:33.607 sys 0m0.434s 00:04:33.607 18:56:35 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.607 18:56:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:33.607 ************************************ 00:04:33.607 END TEST env_vtophys 00:04:33.607 ************************************ 00:04:33.607 18:56:35 env -- common/autotest_common.sh@1142 -- # return 0 00:04:33.607 18:56:35 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:33.607 18:56:35 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.607 18:56:35 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.607 18:56:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.607 ************************************ 00:04:33.607 START TEST env_pci 00:04:33.607 ************************************ 00:04:33.607 18:56:36 env.env_pci -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:33.607 00:04:33.607 00:04:33.607 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.607 http://cunit.sourceforge.net/ 00:04:33.607 00:04:33.607 00:04:33.607 Suite: pci 00:04:33.607 Test: pci_hook ...[2024-07-12 18:56:36.040750] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 116764 has claimed it 00:04:33.607 EAL: Cannot find device (10000:00:01.0) 00:04:33.607 EAL: Failed to attach device on primary process 00:04:33.607 passed 00:04:33.607 00:04:33.607 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.607 suites 1 1 n/a 0 0 00:04:33.607 tests 1 1 1 0 0 00:04:33.607 asserts 25 25 25 0 n/a 00:04:33.607 00:04:33.607 Elapsed time = 0.028 seconds 00:04:33.607 00:04:33.607 real 0m0.048s 00:04:33.607 user 0m0.012s 00:04:33.607 sys 0m0.035s 00:04:33.607 18:56:36 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.607 18:56:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:33.607 ************************************ 00:04:33.607 END TEST env_pci 00:04:33.607 ************************************ 00:04:33.607 18:56:36 env -- common/autotest_common.sh@1142 -- # return 0 00:04:33.607 18:56:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:33.607 18:56:36 env -- env/env.sh@15 -- # uname 00:04:33.607 18:56:36 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:33.607 18:56:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:33.607 18:56:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.607 18:56:36 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:33.607 18:56:36 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.607 18:56:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.607 ************************************ 00:04:33.607 START TEST env_dpdk_post_init 00:04:33.607 ************************************ 00:04:33.607 18:56:36 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.607 EAL: Detected CPU lcores: 96 00:04:33.607 EAL: Detected NUMA nodes: 2 00:04:33.607 EAL: Detected shared linkage of DPDK 00:04:33.867 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.867 EAL: Selected IOVA mode 'VA' 00:04:33.867 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.867 EAL: VFIO support initialized 00:04:33.867 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.867 EAL: Using IOMMU type 1 (Type 1) 00:04:33.867 EAL: Ignore mapping IO port bar(1) 00:04:33.867 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:33.867 EAL: Ignore mapping IO port bar(1) 00:04:33.867 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:33.867 EAL: Ignore mapping IO port bar(1) 00:04:33.867 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:33.867 EAL: Ignore mapping IO port bar(1) 00:04:33.867 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:33.867 EAL: Ignore mapping IO port bar(1) 00:04:33.867 EAL: Probe PCI driver: spdk_ioat (8086:2021) 
device: 0000:00:04.4 (socket 0) 00:04:33.867 EAL: Ignore mapping IO port bar(1) 00:04:33.867 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:33.867 EAL: Ignore mapping IO port bar(1) 00:04:33.867 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:33.867 EAL: Ignore mapping IO port bar(1) 00:04:33.867 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:34.808 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:34.808 EAL: Ignore mapping IO port bar(1) 00:04:34.808 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:34.808 EAL: Ignore mapping IO port bar(1) 00:04:34.808 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:34.808 EAL: Ignore mapping IO port bar(1) 00:04:34.808 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:34.808 EAL: Ignore mapping IO port bar(1) 00:04:34.808 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:34.808 EAL: Ignore mapping IO port bar(1) 00:04:34.808 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:34.808 EAL: Ignore mapping IO port bar(1) 00:04:34.808 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:34.808 EAL: Ignore mapping IO port bar(1) 00:04:34.808 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:34.808 EAL: Ignore mapping IO port bar(1) 00:04:34.808 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:38.106 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:38.106 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:38.106 Starting DPDK initialization... 00:04:38.106 Starting SPDK post initialization... 00:04:38.106 SPDK NVMe probe 00:04:38.106 Attaching to 0000:5e:00.0 00:04:38.106 Attached to 0000:5e:00.0 00:04:38.106 Cleaning up... 
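The probe pass above is driven entirely by the env_dpdk_post_init test binary; a minimal sketch of re-running it by hand with the same core mask and base virtual address the harness passes in this job (the SPDK_DIR path reflects this job's workspace layout and would differ elsewhere; VFIO access typically requires root):

```bash
# Re-run the DPDK post-initialization probe outside the autotest harness.
# SPDK_DIR matches this job's checkout; adjust for other workspaces.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Same arguments the harness uses above: single core, fixed base vaddr.
sudo "$SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init" \
    -c 0x1 --base-virtaddr=0x200000000000
```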
00:04:38.106 00:04:38.106 real 0m4.324s 00:04:38.106 user 0m3.286s 00:04:38.106 sys 0m0.117s 00:04:38.106 18:56:40 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.106 18:56:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.106 ************************************ 00:04:38.106 END TEST env_dpdk_post_init 00:04:38.106 ************************************ 00:04:38.106 18:56:40 env -- common/autotest_common.sh@1142 -- # return 0 00:04:38.106 18:56:40 env -- env/env.sh@26 -- # uname 00:04:38.106 18:56:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:38.106 18:56:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.106 18:56:40 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.106 18:56:40 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.106 18:56:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.106 ************************************ 00:04:38.106 START TEST env_mem_callbacks 00:04:38.106 ************************************ 00:04:38.106 18:56:40 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.106 EAL: Detected CPU lcores: 96 00:04:38.106 EAL: Detected NUMA nodes: 2 00:04:38.106 EAL: Detected shared linkage of DPDK 00:04:38.106 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.106 EAL: Selected IOVA mode 'VA' 00:04:38.106 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.106 EAL: VFIO support initialized 00:04:38.106 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.106 00:04:38.106 00:04:38.106 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.106 http://cunit.sourceforge.net/ 00:04:38.106 00:04:38.106 00:04:38.106 Suite: memory 00:04:38.106 Test: test ... 
00:04:38.106 register 0x200000200000 2097152 00:04:38.106 malloc 3145728 00:04:38.106 register 0x200000400000 4194304 00:04:38.106 buf 0x200000500000 len 3145728 PASSED 00:04:38.106 malloc 64 00:04:38.106 buf 0x2000004fff40 len 64 PASSED 00:04:38.106 malloc 4194304 00:04:38.106 register 0x200000800000 6291456 00:04:38.106 buf 0x200000a00000 len 4194304 PASSED 00:04:38.106 free 0x200000500000 3145728 00:04:38.106 free 0x2000004fff40 64 00:04:38.106 unregister 0x200000400000 4194304 PASSED 00:04:38.106 free 0x200000a00000 4194304 00:04:38.106 unregister 0x200000800000 6291456 PASSED 00:04:38.106 malloc 8388608 00:04:38.106 register 0x200000400000 10485760 00:04:38.106 buf 0x200000600000 len 8388608 PASSED 00:04:38.106 free 0x200000600000 8388608 00:04:38.106 unregister 0x200000400000 10485760 PASSED 00:04:38.106 passed 00:04:38.106 00:04:38.106 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.106 suites 1 1 n/a 0 0 00:04:38.106 tests 1 1 1 0 0 00:04:38.106 asserts 15 15 15 0 n/a 00:04:38.106 00:04:38.106 Elapsed time = 0.008 seconds 00:04:38.106 00:04:38.106 real 0m0.057s 00:04:38.106 user 0m0.022s 00:04:38.106 sys 0m0.035s 00:04:38.106 18:56:40 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.106 18:56:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:38.106 ************************************ 00:04:38.106 END TEST env_mem_callbacks 00:04:38.106 ************************************ 00:04:38.106 18:56:40 env -- common/autotest_common.sh@1142 -- # return 0 00:04:38.106 00:04:38.106 real 0m6.099s 00:04:38.106 user 0m4.271s 00:04:38.106 sys 0m0.909s 00:04:38.106 18:56:40 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.106 18:56:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.106 ************************************ 00:04:38.106 END TEST env 00:04:38.106 ************************************ 00:04:38.106 18:56:40 -- common/autotest_common.sh@1142 -- # return 0 00:04:38.106 18:56:40 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:38.106 18:56:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.106 18:56:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.106 18:56:40 -- common/autotest_common.sh@10 -- # set +x 00:04:38.366 ************************************ 00:04:38.366 START TEST rpc 00:04:38.366 ************************************ 00:04:38.366 18:56:40 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:38.366 * Looking for test storage... 00:04:38.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:38.366 18:56:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=117586 00:04:38.366 18:56:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.366 18:56:40 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:38.366 18:56:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 117586 00:04:38.366 18:56:40 rpc -- common/autotest_common.sh@829 -- # '[' -z 117586 ']' 00:04:38.366 18:56:40 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.366 18:56:40 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.366 18:56:40 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:38.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.366 18:56:40 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.366 18:56:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.366 [2024-07-12 18:56:40.840999] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:04:38.366 [2024-07-12 18:56:40.841048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117586 ] 00:04:38.366 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.366 [2024-07-12 18:56:40.907636] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.626 [2024-07-12 18:56:40.985825] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:38.626 [2024-07-12 18:56:40.985863] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 117586' to capture a snapshot of events at runtime. 00:04:38.626 [2024-07-12 18:56:40.985871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:38.626 [2024-07-12 18:56:40.985877] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:38.626 [2024-07-12 18:56:40.985882] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid117586 for offline analysis/debug. 00:04:38.626 [2024-07-12 18:56:40.985918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.197 18:56:41 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.197 18:56:41 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:39.197 18:56:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.197 18:56:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.197 18:56:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:39.197 18:56:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:39.197 18:56:41 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.197 18:56:41 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.197 18:56:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.197 ************************************ 00:04:39.197 START TEST rpc_integrity 00:04:39.197 ************************************ 00:04:39.197 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:39.197 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.197 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.197 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.197 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.197 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:39.197 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.197 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.197 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.197 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.197 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.197 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.197 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:39.197 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.197 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.197 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.197 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.197 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.197 { 00:04:39.197 "name": "Malloc0", 00:04:39.197 "aliases": [ 00:04:39.197 "ba490dc3-bc7e-4445-8cc7-90abed0ef3b4" 00:04:39.197 ], 00:04:39.197 "product_name": "Malloc disk", 00:04:39.197 "block_size": 512, 00:04:39.197 "num_blocks": 16384, 00:04:39.197 "uuid": "ba490dc3-bc7e-4445-8cc7-90abed0ef3b4", 00:04:39.197 "assigned_rate_limits": { 00:04:39.197 "rw_ios_per_sec": 0, 00:04:39.197 "rw_mbytes_per_sec": 0, 00:04:39.197 "r_mbytes_per_sec": 0, 00:04:39.197 "w_mbytes_per_sec": 0 00:04:39.197 }, 00:04:39.197 "claimed": false, 00:04:39.197 "zoned": false, 00:04:39.197 "supported_io_types": { 00:04:39.197 "read": true, 00:04:39.197 "write": true, 00:04:39.197 "unmap": true, 00:04:39.197 "flush": true, 00:04:39.197 "reset": true, 00:04:39.197 "nvme_admin": false, 00:04:39.197 "nvme_io": false, 00:04:39.197 "nvme_io_md": false, 00:04:39.197 "write_zeroes": true, 00:04:39.197 "zcopy": true, 00:04:39.197 "get_zone_info": false, 00:04:39.197 "zone_management": false, 00:04:39.197 "zone_append": false, 00:04:39.197 "compare": false, 00:04:39.197 "compare_and_write": false, 00:04:39.197 "abort": true, 00:04:39.197 "seek_hole": false, 00:04:39.197 "seek_data": false, 00:04:39.198 "copy": true, 00:04:39.198 "nvme_iov_md": false 00:04:39.198 }, 00:04:39.198 "memory_domains": [ 00:04:39.198 { 00:04:39.198 "dma_device_id": "system", 00:04:39.198 "dma_device_type": 1 00:04:39.198 }, 00:04:39.198 { 00:04:39.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.198 "dma_device_type": 2 00:04:39.198 } 00:04:39.198 ], 00:04:39.198 "driver_specific": {} 00:04:39.198 } 00:04:39.198 ]' 00:04:39.198 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.458 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.458 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:39.458 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.458 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.458 [2024-07-12 18:56:41.799737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:39.458 [2024-07-12 18:56:41.799767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.458 [2024-07-12 18:56:41.799780] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ee22d0 00:04:39.458 [2024-07-12 18:56:41.799787] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.458 
[2024-07-12 18:56:41.800866] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.458 [2024-07-12 18:56:41.800886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.458 Passthru0 00:04:39.458 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.458 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.458 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.458 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.458 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.458 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.458 { 00:04:39.458 "name": "Malloc0", 00:04:39.458 "aliases": [ 00:04:39.458 "ba490dc3-bc7e-4445-8cc7-90abed0ef3b4" 00:04:39.458 ], 00:04:39.458 "product_name": "Malloc disk", 00:04:39.458 "block_size": 512, 00:04:39.458 "num_blocks": 16384, 00:04:39.458 "uuid": "ba490dc3-bc7e-4445-8cc7-90abed0ef3b4", 00:04:39.458 "assigned_rate_limits": { 00:04:39.458 "rw_ios_per_sec": 0, 00:04:39.458 "rw_mbytes_per_sec": 0, 00:04:39.458 "r_mbytes_per_sec": 0, 00:04:39.458 "w_mbytes_per_sec": 0 00:04:39.458 }, 00:04:39.458 "claimed": true, 00:04:39.458 "claim_type": "exclusive_write", 00:04:39.459 "zoned": false, 00:04:39.459 "supported_io_types": { 00:04:39.459 "read": true, 00:04:39.459 "write": true, 00:04:39.459 "unmap": true, 00:04:39.459 "flush": true, 00:04:39.459 "reset": true, 00:04:39.459 "nvme_admin": false, 00:04:39.459 "nvme_io": false, 00:04:39.459 "nvme_io_md": false, 00:04:39.459 "write_zeroes": true, 00:04:39.459 "zcopy": true, 00:04:39.459 "get_zone_info": false, 00:04:39.459 "zone_management": false, 00:04:39.459 "zone_append": false, 00:04:39.459 "compare": false, 00:04:39.459 "compare_and_write": false, 00:04:39.459 "abort": true, 00:04:39.459 "seek_hole": false, 00:04:39.459 "seek_data": false, 00:04:39.459 "copy": true, 00:04:39.459 "nvme_iov_md": false 00:04:39.459 }, 00:04:39.459 "memory_domains": [ 00:04:39.459 { 00:04:39.459 "dma_device_id": "system", 00:04:39.459 "dma_device_type": 1 00:04:39.459 }, 00:04:39.459 { 00:04:39.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.459 "dma_device_type": 2 00:04:39.459 } 00:04:39.459 ], 00:04:39.459 "driver_specific": {} 00:04:39.459 }, 00:04:39.459 { 00:04:39.459 "name": "Passthru0", 00:04:39.459 "aliases": [ 00:04:39.459 "b209ddd3-ec57-5940-9d8c-9b4958d138b6" 00:04:39.459 ], 00:04:39.459 "product_name": "passthru", 00:04:39.459 "block_size": 512, 00:04:39.459 "num_blocks": 16384, 00:04:39.459 "uuid": "b209ddd3-ec57-5940-9d8c-9b4958d138b6", 00:04:39.459 "assigned_rate_limits": { 00:04:39.459 "rw_ios_per_sec": 0, 00:04:39.459 "rw_mbytes_per_sec": 0, 00:04:39.459 "r_mbytes_per_sec": 0, 00:04:39.459 "w_mbytes_per_sec": 0 00:04:39.459 }, 00:04:39.459 "claimed": false, 00:04:39.459 "zoned": false, 00:04:39.459 "supported_io_types": { 00:04:39.459 "read": true, 00:04:39.459 "write": true, 00:04:39.459 "unmap": true, 00:04:39.459 "flush": true, 00:04:39.459 "reset": true, 00:04:39.459 "nvme_admin": false, 00:04:39.459 "nvme_io": false, 00:04:39.459 "nvme_io_md": false, 00:04:39.459 "write_zeroes": true, 00:04:39.459 "zcopy": true, 00:04:39.459 "get_zone_info": false, 00:04:39.459 "zone_management": false, 00:04:39.459 "zone_append": false, 00:04:39.459 "compare": false, 00:04:39.459 "compare_and_write": false, 00:04:39.459 "abort": true, 00:04:39.459 "seek_hole": false, 
00:04:39.459 "seek_data": false, 00:04:39.459 "copy": true, 00:04:39.459 "nvme_iov_md": false 00:04:39.459 }, 00:04:39.459 "memory_domains": [ 00:04:39.459 { 00:04:39.459 "dma_device_id": "system", 00:04:39.459 "dma_device_type": 1 00:04:39.459 }, 00:04:39.459 { 00:04:39.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.459 "dma_device_type": 2 00:04:39.459 } 00:04:39.459 ], 00:04:39.459 "driver_specific": { 00:04:39.459 "passthru": { 00:04:39.459 "name": "Passthru0", 00:04:39.459 "base_bdev_name": "Malloc0" 00:04:39.459 } 00:04:39.459 } 00:04:39.459 } 00:04:39.459 ]' 00:04:39.459 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.459 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.459 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.459 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.459 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.459 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.459 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:39.459 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.459 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.459 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.459 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:39.459 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.459 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.459 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.459 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:39.459 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:39.459 18:56:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:39.459 00:04:39.459 real 0m0.274s 00:04:39.459 user 0m0.174s 00:04:39.459 sys 0m0.037s 00:04:39.459 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.459 18:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.459 ************************************ 00:04:39.459 END TEST rpc_integrity 00:04:39.459 ************************************ 00:04:39.459 18:56:41 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:39.459 18:56:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:39.459 18:56:41 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.459 18:56:41 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.459 18:56:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.459 ************************************ 00:04:39.459 START TEST rpc_plugins 00:04:39.459 ************************************ 00:04:39.459 18:56:42 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:39.459 18:56:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:39.459 18:56:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.459 18:56:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.719 18:56:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.719 18:56:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:39.719 18:56:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:39.719 18:56:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.719 18:56:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.719 18:56:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.719 18:56:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:39.719 { 00:04:39.719 "name": "Malloc1", 00:04:39.719 "aliases": [ 00:04:39.719 "1c9e37b4-424f-4004-ae7d-fdfc1c1b7d94" 00:04:39.719 ], 00:04:39.719 "product_name": "Malloc disk", 00:04:39.719 "block_size": 4096, 00:04:39.720 "num_blocks": 256, 00:04:39.720 "uuid": "1c9e37b4-424f-4004-ae7d-fdfc1c1b7d94", 00:04:39.720 "assigned_rate_limits": { 00:04:39.720 "rw_ios_per_sec": 0, 00:04:39.720 "rw_mbytes_per_sec": 0, 00:04:39.720 "r_mbytes_per_sec": 0, 00:04:39.720 "w_mbytes_per_sec": 0 00:04:39.720 }, 00:04:39.720 "claimed": false, 00:04:39.720 "zoned": false, 00:04:39.720 "supported_io_types": { 00:04:39.720 "read": true, 00:04:39.720 "write": true, 00:04:39.720 "unmap": true, 00:04:39.720 "flush": true, 00:04:39.720 "reset": true, 00:04:39.720 "nvme_admin": false, 00:04:39.720 "nvme_io": false, 00:04:39.720 "nvme_io_md": false, 00:04:39.720 "write_zeroes": true, 00:04:39.720 "zcopy": true, 00:04:39.720 "get_zone_info": false, 00:04:39.720 "zone_management": false, 00:04:39.720 "zone_append": false, 00:04:39.720 "compare": false, 00:04:39.720 "compare_and_write": false, 00:04:39.720 "abort": true, 00:04:39.720 "seek_hole": false, 00:04:39.720 "seek_data": false, 00:04:39.720 "copy": true, 00:04:39.720 "nvme_iov_md": false 00:04:39.720 }, 00:04:39.720 "memory_domains": [ 00:04:39.720 { 00:04:39.720 "dma_device_id": "system", 00:04:39.720 "dma_device_type": 1 00:04:39.720 }, 00:04:39.720 { 00:04:39.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.720 "dma_device_type": 2 00:04:39.720 } 00:04:39.720 ], 00:04:39.720 "driver_specific": {} 00:04:39.720 } 00:04:39.720 ]' 00:04:39.720 18:56:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:39.720 18:56:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:39.720 18:56:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:39.720 18:56:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.720 18:56:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.720 18:56:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.720 18:56:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:39.720 18:56:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.720 18:56:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.720 18:56:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.720 18:56:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:39.720 18:56:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:39.720 18:56:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:39.720 00:04:39.720 real 0m0.135s 00:04:39.720 user 0m0.087s 00:04:39.720 sys 0m0.014s 00:04:39.720 18:56:42 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.720 18:56:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.720 ************************************ 00:04:39.720 END TEST rpc_plugins 00:04:39.720 ************************************ 00:04:39.720 18:56:42 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:39.720 18:56:42 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:39.720 18:56:42 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.720 18:56:42 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.720 18:56:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.720 ************************************ 00:04:39.720 START TEST rpc_trace_cmd_test 00:04:39.720 ************************************ 00:04:39.720 18:56:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:39.720 18:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:39.720 18:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:39.720 18:56:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.720 18:56:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.720 18:56:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.720 18:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:39.720 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid117586", 00:04:39.720 "tpoint_group_mask": "0x8", 00:04:39.720 "iscsi_conn": { 00:04:39.720 "mask": "0x2", 00:04:39.720 "tpoint_mask": "0x0" 00:04:39.720 }, 00:04:39.720 "scsi": { 00:04:39.720 "mask": "0x4", 00:04:39.720 "tpoint_mask": "0x0" 00:04:39.720 }, 00:04:39.720 "bdev": { 00:04:39.720 "mask": "0x8", 00:04:39.720 "tpoint_mask": "0xffffffffffffffff" 00:04:39.720 }, 00:04:39.720 "nvmf_rdma": { 00:04:39.720 "mask": "0x10", 00:04:39.720 "tpoint_mask": "0x0" 00:04:39.720 }, 00:04:39.720 "nvmf_tcp": { 00:04:39.720 "mask": "0x20", 00:04:39.720 "tpoint_mask": "0x0" 00:04:39.720 }, 00:04:39.720 "ftl": { 00:04:39.720 "mask": "0x40", 00:04:39.720 "tpoint_mask": "0x0" 00:04:39.720 }, 00:04:39.720 "blobfs": { 00:04:39.720 "mask": "0x80", 00:04:39.720 "tpoint_mask": "0x0" 00:04:39.720 }, 00:04:39.720 "dsa": { 00:04:39.720 "mask": "0x200", 00:04:39.720 "tpoint_mask": "0x0" 00:04:39.720 }, 00:04:39.720 "thread": { 00:04:39.720 "mask": "0x400", 00:04:39.720 "tpoint_mask": "0x0" 00:04:39.720 }, 00:04:39.720 "nvme_pcie": { 00:04:39.720 "mask": "0x800", 00:04:39.720 "tpoint_mask": "0x0" 00:04:39.720 }, 00:04:39.720 "iaa": { 00:04:39.720 "mask": "0x1000", 00:04:39.720 "tpoint_mask": "0x0" 00:04:39.720 }, 00:04:39.720 "nvme_tcp": { 00:04:39.720 "mask": "0x2000", 00:04:39.720 "tpoint_mask": "0x0" 00:04:39.720 }, 00:04:39.720 "bdev_nvme": { 00:04:39.720 "mask": "0x4000", 00:04:39.720 "tpoint_mask": "0x0" 00:04:39.720 }, 00:04:39.720 "sock": { 00:04:39.720 "mask": "0x8000", 00:04:39.720 "tpoint_mask": "0x0" 00:04:39.720 } 00:04:39.720 }' 00:04:39.720 18:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:39.720 18:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:39.720 18:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:39.979 18:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:39.979 18:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:39.979 18:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:39.979 18:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:39.979 18:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:39.979 18:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:39.979 18:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
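The checks above pin down what launching spdk_tgt with '-e bdev' did: trace_get_info reports a tpoint_group_mask of "0x8" (the bdev group) with its tpoint_mask fully set, and exposes the shared-memory trace file /dev/shm/spdk_tgt_trace.pid117586. A minimal by-hand version of the same verification, assuming a target started the same way and the rpc.py client from the SPDK tree (a sketch, not the harness code):

  # start the target with the bdev tracepoint group enabled at boot
  ./build/bin/spdk_tgt -e bdev &

  # the same assertions rpc.sh makes, via the plain JSON-RPC client
  ./scripts/rpc.py trace_get_info | jq 'has("tpoint_shm_path")'   # expect true
  ./scripts/rpc.py trace_get_info | jq -r '.bdev.tpoint_mask'     # expect a nonzero mask

  # capture a snapshot of events at runtime, per the app_setup_trace notice earlier
  spdk_trace -s spdk_tgt -p <target pid>
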
00:04:39.979 00:04:39.979 real 0m0.221s 00:04:39.979 user 0m0.187s 00:04:39.979 sys 0m0.027s 00:04:39.979 18:56:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.979 18:56:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.979 ************************************ 00:04:39.979 END TEST rpc_trace_cmd_test 00:04:39.979 ************************************ 00:04:39.979 18:56:42 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:39.979 18:56:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:39.979 18:56:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:39.979 18:56:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:39.979 18:56:42 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.979 18:56:42 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.979 18:56:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.979 ************************************ 00:04:39.979 START TEST rpc_daemon_integrity 00:04:39.979 ************************************ 00:04:39.979 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:39.979 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.979 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.979 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.979 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.979 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.979 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.238 { 00:04:40.238 "name": "Malloc2", 00:04:40.238 "aliases": [ 00:04:40.238 "d7991a1a-b5d2-44e6-899a-4b9e4be3afb0" 00:04:40.238 ], 00:04:40.238 "product_name": "Malloc disk", 00:04:40.238 "block_size": 512, 00:04:40.238 "num_blocks": 16384, 00:04:40.238 "uuid": "d7991a1a-b5d2-44e6-899a-4b9e4be3afb0", 00:04:40.238 "assigned_rate_limits": { 00:04:40.238 "rw_ios_per_sec": 0, 00:04:40.238 "rw_mbytes_per_sec": 0, 00:04:40.238 "r_mbytes_per_sec": 0, 00:04:40.238 "w_mbytes_per_sec": 0 00:04:40.238 }, 00:04:40.238 "claimed": false, 00:04:40.238 "zoned": false, 00:04:40.238 "supported_io_types": { 00:04:40.238 "read": true, 00:04:40.238 "write": true, 00:04:40.238 "unmap": true, 00:04:40.238 "flush": true, 00:04:40.238 "reset": true, 00:04:40.238 "nvme_admin": false, 00:04:40.238 "nvme_io": false, 
00:04:40.238 "nvme_io_md": false, 00:04:40.238 "write_zeroes": true, 00:04:40.238 "zcopy": true, 00:04:40.238 "get_zone_info": false, 00:04:40.238 "zone_management": false, 00:04:40.238 "zone_append": false, 00:04:40.238 "compare": false, 00:04:40.238 "compare_and_write": false, 00:04:40.238 "abort": true, 00:04:40.238 "seek_hole": false, 00:04:40.238 "seek_data": false, 00:04:40.238 "copy": true, 00:04:40.238 "nvme_iov_md": false 00:04:40.238 }, 00:04:40.238 "memory_domains": [ 00:04:40.238 { 00:04:40.238 "dma_device_id": "system", 00:04:40.238 "dma_device_type": 1 00:04:40.238 }, 00:04:40.238 { 00:04:40.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.238 "dma_device_type": 2 00:04:40.238 } 00:04:40.238 ], 00:04:40.238 "driver_specific": {} 00:04:40.238 } 00:04:40.238 ]' 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.238 [2024-07-12 18:56:42.629986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:40.238 [2024-07-12 18:56:42.630016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.238 [2024-07-12 18:56:42.630028] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2079ac0 00:04:40.238 [2024-07-12 18:56:42.630034] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.238 [2024-07-12 18:56:42.630985] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:40.238 [2024-07-12 18:56:42.631005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:40.238 Passthru0 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.238 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.238 { 00:04:40.238 "name": "Malloc2", 00:04:40.238 "aliases": [ 00:04:40.238 "d7991a1a-b5d2-44e6-899a-4b9e4be3afb0" 00:04:40.238 ], 00:04:40.238 "product_name": "Malloc disk", 00:04:40.238 "block_size": 512, 00:04:40.238 "num_blocks": 16384, 00:04:40.238 "uuid": "d7991a1a-b5d2-44e6-899a-4b9e4be3afb0", 00:04:40.239 "assigned_rate_limits": { 00:04:40.239 "rw_ios_per_sec": 0, 00:04:40.239 "rw_mbytes_per_sec": 0, 00:04:40.239 "r_mbytes_per_sec": 0, 00:04:40.239 "w_mbytes_per_sec": 0 00:04:40.239 }, 00:04:40.239 "claimed": true, 00:04:40.239 "claim_type": "exclusive_write", 00:04:40.239 "zoned": false, 00:04:40.239 "supported_io_types": { 00:04:40.239 "read": true, 00:04:40.239 "write": true, 00:04:40.239 "unmap": true, 00:04:40.239 "flush": true, 00:04:40.239 "reset": true, 00:04:40.239 "nvme_admin": false, 00:04:40.239 "nvme_io": false, 00:04:40.239 "nvme_io_md": false, 00:04:40.239 "write_zeroes": true, 00:04:40.239 "zcopy": true, 00:04:40.239 "get_zone_info": 
false, 00:04:40.239 "zone_management": false, 00:04:40.239 "zone_append": false, 00:04:40.239 "compare": false, 00:04:40.239 "compare_and_write": false, 00:04:40.239 "abort": true, 00:04:40.239 "seek_hole": false, 00:04:40.239 "seek_data": false, 00:04:40.239 "copy": true, 00:04:40.239 "nvme_iov_md": false 00:04:40.239 }, 00:04:40.239 "memory_domains": [ 00:04:40.239 { 00:04:40.239 "dma_device_id": "system", 00:04:40.239 "dma_device_type": 1 00:04:40.239 }, 00:04:40.239 { 00:04:40.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.239 "dma_device_type": 2 00:04:40.239 } 00:04:40.239 ], 00:04:40.239 "driver_specific": {} 00:04:40.239 }, 00:04:40.239 { 00:04:40.239 "name": "Passthru0", 00:04:40.239 "aliases": [ 00:04:40.239 "44657ca9-10d9-529e-8108-2882864f370e" 00:04:40.239 ], 00:04:40.239 "product_name": "passthru", 00:04:40.239 "block_size": 512, 00:04:40.239 "num_blocks": 16384, 00:04:40.239 "uuid": "44657ca9-10d9-529e-8108-2882864f370e", 00:04:40.239 "assigned_rate_limits": { 00:04:40.239 "rw_ios_per_sec": 0, 00:04:40.239 "rw_mbytes_per_sec": 0, 00:04:40.239 "r_mbytes_per_sec": 0, 00:04:40.239 "w_mbytes_per_sec": 0 00:04:40.239 }, 00:04:40.239 "claimed": false, 00:04:40.239 "zoned": false, 00:04:40.239 "supported_io_types": { 00:04:40.239 "read": true, 00:04:40.239 "write": true, 00:04:40.239 "unmap": true, 00:04:40.239 "flush": true, 00:04:40.239 "reset": true, 00:04:40.239 "nvme_admin": false, 00:04:40.239 "nvme_io": false, 00:04:40.239 "nvme_io_md": false, 00:04:40.239 "write_zeroes": true, 00:04:40.239 "zcopy": true, 00:04:40.239 "get_zone_info": false, 00:04:40.239 "zone_management": false, 00:04:40.239 "zone_append": false, 00:04:40.239 "compare": false, 00:04:40.239 "compare_and_write": false, 00:04:40.239 "abort": true, 00:04:40.239 "seek_hole": false, 00:04:40.239 "seek_data": false, 00:04:40.239 "copy": true, 00:04:40.239 "nvme_iov_md": false 00:04:40.239 }, 00:04:40.239 "memory_domains": [ 00:04:40.239 { 00:04:40.239 "dma_device_id": "system", 00:04:40.239 "dma_device_type": 1 00:04:40.239 }, 00:04:40.239 { 00:04:40.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.239 "dma_device_type": 2 00:04:40.239 } 00:04:40.239 ], 00:04:40.239 "driver_specific": { 00:04:40.239 "passthru": { 00:04:40.239 "name": "Passthru0", 00:04:40.239 "base_bdev_name": "Malloc2" 00:04:40.239 } 00:04:40.239 } 00:04:40.239 } 00:04:40.239 ]' 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.239 18:56:42 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.239 00:04:40.239 real 0m0.281s 00:04:40.239 user 0m0.171s 00:04:40.239 sys 0m0.044s 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.239 18:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.239 ************************************ 00:04:40.239 END TEST rpc_daemon_integrity 00:04:40.239 ************************************ 00:04:40.239 18:56:42 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:40.239 18:56:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:40.239 18:56:42 rpc -- rpc/rpc.sh@84 -- # killprocess 117586 00:04:40.239 18:56:42 rpc -- common/autotest_common.sh@948 -- # '[' -z 117586 ']' 00:04:40.497 18:56:42 rpc -- common/autotest_common.sh@952 -- # kill -0 117586 00:04:40.497 18:56:42 rpc -- common/autotest_common.sh@953 -- # uname 00:04:40.497 18:56:42 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.497 18:56:42 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 117586 00:04:40.497 18:56:42 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.497 18:56:42 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.497 18:56:42 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 117586' 00:04:40.497 killing process with pid 117586 00:04:40.497 18:56:42 rpc -- common/autotest_common.sh@967 -- # kill 117586 00:04:40.497 18:56:42 rpc -- common/autotest_common.sh@972 -- # wait 117586 00:04:40.756 00:04:40.756 real 0m2.459s 00:04:40.756 user 0m3.145s 00:04:40.756 sys 0m0.699s 00:04:40.756 18:56:43 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.756 18:56:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.756 ************************************ 00:04:40.756 END TEST rpc 00:04:40.756 ************************************ 00:04:40.756 18:56:43 -- common/autotest_common.sh@1142 -- # return 0 00:04:40.756 18:56:43 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:40.756 18:56:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.756 18:56:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.756 18:56:43 -- common/autotest_common.sh@10 -- # set +x 00:04:40.756 ************************************ 00:04:40.756 START TEST skip_rpc 00:04:40.756 ************************************ 00:04:40.756 18:56:43 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:40.756 * Looking for test storage... 
00:04:40.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:40.756 18:56:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:40.756 18:56:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.756 18:56:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:40.756 18:56:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.756 18:56:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.756 18:56:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.014 ************************************ 00:04:41.014 START TEST skip_rpc 00:04:41.014 ************************************ 00:04:41.014 18:56:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:41.014 18:56:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=118222 00:04:41.014 18:56:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.014 18:56:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:41.014 18:56:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:41.014 [2024-07-12 18:56:43.401238] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:04:41.014 [2024-07-12 18:56:43.401278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118222 ] 00:04:41.014 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.014 [2024-07-12 18:56:43.465407] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.014 [2024-07-12 18:56:43.538077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 118222 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 118222 ']' 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 118222 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118222 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118222' 00:04:46.294 killing process with pid 118222 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 118222 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 118222 00:04:46.294 00:04:46.294 real 0m5.367s 00:04:46.294 user 0m5.121s 00:04:46.294 sys 0m0.271s 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.294 18:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.294 ************************************ 00:04:46.294 END TEST skip_rpc 00:04:46.294 ************************************ 00:04:46.294 18:56:48 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:46.294 18:56:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.294 18:56:48 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.294 18:56:48 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.294 18:56:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.294 ************************************ 00:04:46.294 START TEST skip_rpc_with_json 00:04:46.294 ************************************ 00:04:46.294 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:46.294 18:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.294 18:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=119171 00:04:46.294 18:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.294 18:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.294 18:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 119171 00:04:46.294 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 119171 ']' 00:04:46.294 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.294 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.294 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
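What follows is the JSON round trip at the heart of skip_rpc_with_json: build some state over RPC, persist it with save_config, then prove a fresh target can be driven from that file alone. Reduced to its essential commands (a sketch; rpc_cmd is the harness wrapper around scripts/rpc.py, and the file paths are the CONFIG_PATH/LOG_PATH values set above):

  rpc_cmd nvmf_get_transports --trtype tcp      # expected to fail: no transport exists yet
  rpc_cmd nvmf_create_transport -t tcp          # create the TCP transport
  rpc_cmd save_config > test/rpc/config.json    # snapshot the live config as JSON
  # restart non-interactively from the saved state and check the transport came back
  spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json
  grep -q 'TCP Transport Init' test/rpc/log.txt
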
00:04:46.294 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.294 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.294 [2024-07-12 18:56:48.841695] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:04:46.294 [2024-07-12 18:56:48.841739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119171 ] 00:04:46.555 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.555 [2024-07-12 18:56:48.907938] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.555 [2024-07-12 18:56:48.977448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.125 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.125 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:47.125 18:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.125 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.125 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.125 [2024-07-12 18:56:49.651721] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.125 request: 00:04:47.125 { 00:04:47.125 "trtype": "tcp", 00:04:47.125 "method": "nvmf_get_transports", 00:04:47.125 "req_id": 1 00:04:47.125 } 00:04:47.125 Got JSON-RPC error response 00:04:47.125 response: 00:04:47.125 { 00:04:47.125 "code": -19, 00:04:47.125 "message": "No such device" 00:04:47.126 } 00:04:47.126 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:47.126 18:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.126 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.126 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.126 [2024-07-12 18:56:49.663833] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.126 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.126 18:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.126 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.126 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.386 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.386 18:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:47.386 { 00:04:47.386 "subsystems": [ 00:04:47.386 { 00:04:47.386 "subsystem": "vfio_user_target", 00:04:47.386 "config": null 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "subsystem": "keyring", 00:04:47.386 "config": [] 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "subsystem": "iobuf", 00:04:47.386 "config": [ 00:04:47.386 { 00:04:47.386 "method": "iobuf_set_options", 00:04:47.386 "params": { 00:04:47.386 "small_pool_count": 8192, 00:04:47.386 "large_pool_count": 1024, 00:04:47.386 "small_bufsize": 8192, 00:04:47.386 "large_bufsize": 
135168 00:04:47.386 } 00:04:47.386 } 00:04:47.386 ] 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "subsystem": "sock", 00:04:47.386 "config": [ 00:04:47.386 { 00:04:47.386 "method": "sock_set_default_impl", 00:04:47.386 "params": { 00:04:47.386 "impl_name": "posix" 00:04:47.386 } 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "method": "sock_impl_set_options", 00:04:47.386 "params": { 00:04:47.386 "impl_name": "ssl", 00:04:47.386 "recv_buf_size": 4096, 00:04:47.386 "send_buf_size": 4096, 00:04:47.386 "enable_recv_pipe": true, 00:04:47.386 "enable_quickack": false, 00:04:47.386 "enable_placement_id": 0, 00:04:47.386 "enable_zerocopy_send_server": true, 00:04:47.386 "enable_zerocopy_send_client": false, 00:04:47.386 "zerocopy_threshold": 0, 00:04:47.386 "tls_version": 0, 00:04:47.386 "enable_ktls": false 00:04:47.386 } 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "method": "sock_impl_set_options", 00:04:47.386 "params": { 00:04:47.386 "impl_name": "posix", 00:04:47.386 "recv_buf_size": 2097152, 00:04:47.386 "send_buf_size": 2097152, 00:04:47.386 "enable_recv_pipe": true, 00:04:47.386 "enable_quickack": false, 00:04:47.386 "enable_placement_id": 0, 00:04:47.386 "enable_zerocopy_send_server": true, 00:04:47.386 "enable_zerocopy_send_client": false, 00:04:47.386 "zerocopy_threshold": 0, 00:04:47.386 "tls_version": 0, 00:04:47.386 "enable_ktls": false 00:04:47.386 } 00:04:47.386 } 00:04:47.386 ] 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "subsystem": "vmd", 00:04:47.386 "config": [] 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "subsystem": "accel", 00:04:47.386 "config": [ 00:04:47.386 { 00:04:47.386 "method": "accel_set_options", 00:04:47.386 "params": { 00:04:47.386 "small_cache_size": 128, 00:04:47.386 "large_cache_size": 16, 00:04:47.386 "task_count": 2048, 00:04:47.386 "sequence_count": 2048, 00:04:47.386 "buf_count": 2048 00:04:47.386 } 00:04:47.386 } 00:04:47.386 ] 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "subsystem": "bdev", 00:04:47.386 "config": [ 00:04:47.386 { 00:04:47.386 "method": "bdev_set_options", 00:04:47.386 "params": { 00:04:47.386 "bdev_io_pool_size": 65535, 00:04:47.386 "bdev_io_cache_size": 256, 00:04:47.386 "bdev_auto_examine": true, 00:04:47.386 "iobuf_small_cache_size": 128, 00:04:47.386 "iobuf_large_cache_size": 16 00:04:47.386 } 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "method": "bdev_raid_set_options", 00:04:47.386 "params": { 00:04:47.386 "process_window_size_kb": 1024 00:04:47.386 } 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "method": "bdev_iscsi_set_options", 00:04:47.386 "params": { 00:04:47.386 "timeout_sec": 30 00:04:47.386 } 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "method": "bdev_nvme_set_options", 00:04:47.386 "params": { 00:04:47.386 "action_on_timeout": "none", 00:04:47.386 "timeout_us": 0, 00:04:47.386 "timeout_admin_us": 0, 00:04:47.386 "keep_alive_timeout_ms": 10000, 00:04:47.386 "arbitration_burst": 0, 00:04:47.386 "low_priority_weight": 0, 00:04:47.386 "medium_priority_weight": 0, 00:04:47.386 "high_priority_weight": 0, 00:04:47.386 "nvme_adminq_poll_period_us": 10000, 00:04:47.386 "nvme_ioq_poll_period_us": 0, 00:04:47.386 "io_queue_requests": 0, 00:04:47.386 "delay_cmd_submit": true, 00:04:47.386 "transport_retry_count": 4, 00:04:47.386 "bdev_retry_count": 3, 00:04:47.386 "transport_ack_timeout": 0, 00:04:47.386 "ctrlr_loss_timeout_sec": 0, 00:04:47.386 "reconnect_delay_sec": 0, 00:04:47.386 "fast_io_fail_timeout_sec": 0, 00:04:47.386 "disable_auto_failback": false, 00:04:47.386 "generate_uuids": false, 00:04:47.386 "transport_tos": 0, 
00:04:47.386 "nvme_error_stat": false, 00:04:47.386 "rdma_srq_size": 0, 00:04:47.386 "io_path_stat": false, 00:04:47.386 "allow_accel_sequence": false, 00:04:47.386 "rdma_max_cq_size": 0, 00:04:47.386 "rdma_cm_event_timeout_ms": 0, 00:04:47.386 "dhchap_digests": [ 00:04:47.386 "sha256", 00:04:47.386 "sha384", 00:04:47.386 "sha512" 00:04:47.386 ], 00:04:47.386 "dhchap_dhgroups": [ 00:04:47.386 "null", 00:04:47.386 "ffdhe2048", 00:04:47.386 "ffdhe3072", 00:04:47.386 "ffdhe4096", 00:04:47.386 "ffdhe6144", 00:04:47.386 "ffdhe8192" 00:04:47.386 ] 00:04:47.386 } 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "method": "bdev_nvme_set_hotplug", 00:04:47.386 "params": { 00:04:47.386 "period_us": 100000, 00:04:47.386 "enable": false 00:04:47.386 } 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "method": "bdev_wait_for_examine" 00:04:47.386 } 00:04:47.386 ] 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "subsystem": "scsi", 00:04:47.386 "config": null 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "subsystem": "scheduler", 00:04:47.386 "config": [ 00:04:47.386 { 00:04:47.386 "method": "framework_set_scheduler", 00:04:47.386 "params": { 00:04:47.386 "name": "static" 00:04:47.386 } 00:04:47.386 } 00:04:47.386 ] 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "subsystem": "vhost_scsi", 00:04:47.386 "config": [] 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "subsystem": "vhost_blk", 00:04:47.386 "config": [] 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "subsystem": "ublk", 00:04:47.386 "config": [] 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "subsystem": "nbd", 00:04:47.386 "config": [] 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "subsystem": "nvmf", 00:04:47.386 "config": [ 00:04:47.386 { 00:04:47.386 "method": "nvmf_set_config", 00:04:47.386 "params": { 00:04:47.386 "discovery_filter": "match_any", 00:04:47.386 "admin_cmd_passthru": { 00:04:47.386 "identify_ctrlr": false 00:04:47.386 } 00:04:47.386 } 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "method": "nvmf_set_max_subsystems", 00:04:47.386 "params": { 00:04:47.386 "max_subsystems": 1024 00:04:47.386 } 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "method": "nvmf_set_crdt", 00:04:47.386 "params": { 00:04:47.386 "crdt1": 0, 00:04:47.386 "crdt2": 0, 00:04:47.386 "crdt3": 0 00:04:47.386 } 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "method": "nvmf_create_transport", 00:04:47.386 "params": { 00:04:47.386 "trtype": "TCP", 00:04:47.386 "max_queue_depth": 128, 00:04:47.386 "max_io_qpairs_per_ctrlr": 127, 00:04:47.386 "in_capsule_data_size": 4096, 00:04:47.386 "max_io_size": 131072, 00:04:47.386 "io_unit_size": 131072, 00:04:47.386 "max_aq_depth": 128, 00:04:47.386 "num_shared_buffers": 511, 00:04:47.386 "buf_cache_size": 4294967295, 00:04:47.386 "dif_insert_or_strip": false, 00:04:47.386 "zcopy": false, 00:04:47.386 "c2h_success": true, 00:04:47.386 "sock_priority": 0, 00:04:47.386 "abort_timeout_sec": 1, 00:04:47.386 "ack_timeout": 0, 00:04:47.386 "data_wr_pool_size": 0 00:04:47.386 } 00:04:47.386 } 00:04:47.386 ] 00:04:47.386 }, 00:04:47.386 { 00:04:47.386 "subsystem": "iscsi", 00:04:47.386 "config": [ 00:04:47.386 { 00:04:47.386 "method": "iscsi_set_options", 00:04:47.386 "params": { 00:04:47.386 "node_base": "iqn.2016-06.io.spdk", 00:04:47.386 "max_sessions": 128, 00:04:47.386 "max_connections_per_session": 2, 00:04:47.386 "max_queue_depth": 64, 00:04:47.386 "default_time2wait": 2, 00:04:47.386 "default_time2retain": 20, 00:04:47.386 "first_burst_length": 8192, 00:04:47.386 "immediate_data": true, 00:04:47.386 "allow_duplicated_isid": false, 00:04:47.386 
"error_recovery_level": 0, 00:04:47.386 "nop_timeout": 60, 00:04:47.386 "nop_in_interval": 30, 00:04:47.386 "disable_chap": false, 00:04:47.386 "require_chap": false, 00:04:47.386 "mutual_chap": false, 00:04:47.387 "chap_group": 0, 00:04:47.387 "max_large_datain_per_connection": 64, 00:04:47.387 "max_r2t_per_connection": 4, 00:04:47.387 "pdu_pool_size": 36864, 00:04:47.387 "immediate_data_pool_size": 16384, 00:04:47.387 "data_out_pool_size": 2048 00:04:47.387 } 00:04:47.387 } 00:04:47.387 ] 00:04:47.387 } 00:04:47.387 ] 00:04:47.387 } 00:04:47.387 18:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:47.387 18:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 119171 00:04:47.387 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 119171 ']' 00:04:47.387 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 119171 00:04:47.387 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:47.387 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.387 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119171 00:04:47.387 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.387 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.387 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119171' 00:04:47.387 killing process with pid 119171 00:04:47.387 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 119171 00:04:47.387 18:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 119171 00:04:47.647 18:56:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=119407 00:04:47.647 18:56:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:47.647 18:56:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:52.928 18:56:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 119407 00:04:52.928 18:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 119407 ']' 00:04:52.928 18:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 119407 00:04:52.928 18:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:52.928 18:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.928 18:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119407 00:04:52.928 18:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.928 18:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.928 18:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119407' 00:04:52.928 killing process with pid 119407 00:04:52.928 18:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 119407 00:04:52.928 18:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 119407 00:04:53.189 18:56:55 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:53.189 00:04:53.189 real 0m6.762s 00:04:53.189 user 0m6.563s 00:04:53.189 sys 0m0.622s 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.189 ************************************ 00:04:53.189 END TEST skip_rpc_with_json 00:04:53.189 ************************************ 00:04:53.189 18:56:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.189 18:56:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:53.189 18:56:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.189 18:56:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.189 18:56:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.189 ************************************ 00:04:53.189 START TEST skip_rpc_with_delay 00:04:53.189 ************************************ 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:53.189 [2024-07-12 18:56:55.671728] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
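That error is the point of skip_rpc_with_delay: --no-rpc-server and --wait-for-rpc are contradictory, so the target must refuse to start, and the NOT wrapper turns that refusal into a pass. In isolation the assertion is just (sketch):

  # spdk_tgt must exit nonzero for this flag pair; a clean start would fail the test
  if spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo 'spdk_tgt accepted a contradictory flag pair' >&2
      exit 1
  fi
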
00:04:53.189 [2024-07-12 18:56:55.671783] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:53.189 00:04:53.189 real 0m0.067s 00:04:53.189 user 0m0.042s 00:04:53.189 sys 0m0.024s 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.189 18:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:53.189 ************************************ 00:04:53.189 END TEST skip_rpc_with_delay 00:04:53.189 ************************************ 00:04:53.189 18:56:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.189 18:56:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:53.189 18:56:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:53.189 18:56:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:53.189 18:56:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.189 18:56:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.189 18:56:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.189 ************************************ 00:04:53.189 START TEST exit_on_failed_rpc_init 00:04:53.189 ************************************ 00:04:53.189 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:53.189 18:56:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=120378 00:04:53.189 18:56:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 120378 00:04:53.190 18:56:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.190 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 120378 ']' 00:04:53.190 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.190 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.190 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.190 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.190 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.450 [2024-07-12 18:56:55.801419] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
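waitforlisten above blocks until the freshly launched target is actually serving on /var/tmp/spdk.sock. A rough sketch of such a wait loop, assuming a plain poll on the socket path (the harness's real helper is more involved and bounds retries differently):

sock=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    [ -S "$sock" ] && break      # socket exists: the target is very likely listening
    sleep 0.1
done
[ -S "$sock" ] || { echo "target never listened on $sock" >&2; exit 1; }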
00:04:53.450 [2024-07-12 18:56:55.801463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120378 ] 00:04:53.450 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.450 [2024-07-12 18:56:55.869744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.450 [2024-07-12 18:56:55.949479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:54.390 [2024-07-12 18:56:56.667004] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
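This second spdk_tgt runs on a different core mask (0x2) but with the same default RPC socket path as the still-running first target, so its RPC init is expected to fail; the errors that follow are the behavior under test. Illustrative only, the collision can be anticipated from the shell:

sock=/var/tmp/spdk.sock
if [ -S "$sock" ]; then
    # A second target using the default RPC path will fail to bind here.
    echo "RPC socket $sock is already in use" >&2
fi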
00:04:54.390 [2024-07-12 18:56:56.667048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120611 ] 00:04:54.390 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.390 [2024-07-12 18:56:56.730531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.390 [2024-07-12 18:56:56.803564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.390 [2024-07-12 18:56:56.803625] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:54.390 [2024-07-12 18:56:56.803634] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:54.390 [2024-07-12 18:56:56.803639] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 120378 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 120378 ']' 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 120378 00:04:54.390 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:54.391 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:54.391 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120378 00:04:54.391 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:54.391 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:54.391 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120378' 00:04:54.391 killing process with pid 120378 00:04:54.391 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 120378 00:04:54.391 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 120378 00:04:54.960 00:04:54.960 real 0m1.477s 00:04:54.960 user 0m1.700s 00:04:54.960 sys 0m0.414s 00:04:54.960 18:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.960 18:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.960 ************************************ 00:04:54.960 END TEST exit_on_failed_rpc_init 00:04:54.960 ************************************ 00:04:54.960 18:56:57 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:54.960 18:56:57 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:54.961 00:04:54.961 real 0m14.032s 00:04:54.961 user 0m13.565s 00:04:54.961 sys 0m1.578s 00:04:54.961 18:56:57 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.961 18:56:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.961 ************************************ 00:04:54.961 END TEST skip_rpc 00:04:54.961 ************************************ 00:04:54.961 18:56:57 -- common/autotest_common.sh@1142 -- # return 0 00:04:54.961 18:56:57 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:54.961 18:56:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.961 18:56:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.961 18:56:57 -- common/autotest_common.sh@10 -- # set +x 00:04:54.961 ************************************ 00:04:54.961 START TEST rpc_client 00:04:54.961 ************************************ 00:04:54.961 18:56:57 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:54.961 * Looking for test storage... 00:04:54.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:54.961 18:56:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:54.961 OK 00:04:54.961 18:56:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:54.961 00:04:54.961 real 0m0.114s 00:04:54.961 user 0m0.056s 00:04:54.961 sys 0m0.067s 00:04:54.961 18:56:57 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.961 18:56:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:54.961 ************************************ 00:04:54.961 END TEST rpc_client 00:04:54.961 ************************************ 00:04:54.961 18:56:57 -- common/autotest_common.sh@1142 -- # return 0 00:04:54.961 18:56:57 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:54.961 18:56:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.961 18:56:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.961 18:56:57 -- common/autotest_common.sh@10 -- # set +x 00:04:54.961 ************************************ 00:04:54.961 START TEST json_config 00:04:54.961 ************************************ 00:04:54.961 18:56:57 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:55.221 18:56:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.221 18:56:57 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.221 18:56:57 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:55.221 18:56:57 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.221 18:56:57 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.221 18:56:57 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.221 18:56:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.222 18:56:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.222 18:56:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.222 18:56:57 json_config -- paths/export.sh@5 -- # export PATH 00:04:55.222 18:56:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.222 18:56:57 json_config -- nvmf/common.sh@47 -- # : 0 00:04:55.222 18:56:57 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:55.222 18:56:57 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:55.222 18:56:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.222 18:56:57 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.222 18:56:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.222 18:56:57 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:55.222 18:56:57 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:55.222 18:56:57 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:55.222 INFO: JSON configuration test init 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:55.222 18:56:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:55.222 18:56:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:55.222 18:56:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:55.222 18:56:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.222 18:56:57 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:55.222 18:56:57 json_config -- json_config/common.sh@9 -- # local app=target 00:04:55.222 18:56:57 json_config -- json_config/common.sh@10 -- # shift 00:04:55.222 18:56:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.222 18:56:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.222 18:56:57 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:04:55.222 18:56:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.222 18:56:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.222 18:56:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=120845 00:04:55.222 18:56:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:55.222 Waiting for target to run... 00:04:55.222 18:56:57 json_config -- json_config/common.sh@25 -- # waitforlisten 120845 /var/tmp/spdk_tgt.sock 00:04:55.222 18:56:57 json_config -- common/autotest_common.sh@829 -- # '[' -z 120845 ']' 00:04:55.222 18:56:57 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.222 18:56:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:55.222 18:56:57 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.222 18:56:57 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.222 18:56:57 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.222 18:56:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.222 [2024-07-12 18:56:57.667997] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:04:55.222 [2024-07-12 18:56:57.668052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120845 ] 00:04:55.222 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.792 [2024-07-12 18:56:58.113372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.792 [2024-07-12 18:56:58.205489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.052 18:56:58 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.052 18:56:58 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:56.052 18:56:58 json_config -- json_config/common.sh@26 -- # echo '' 00:04:56.052 00:04:56.052 18:56:58 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:56.052 18:56:58 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:56.052 18:56:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.052 18:56:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.052 18:56:58 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:56.052 18:56:58 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:56.052 18:56:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:56.052 18:56:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.052 18:56:58 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:56.052 18:56:58 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:56.052 18:56:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:59.347 18:57:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:59.347 18:57:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:59.347 18:57:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:59.347 18:57:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.347 18:57:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:59.347 18:57:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:59.347 18:57:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:59.347 18:57:01 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:59.347 18:57:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:59.607 MallocForNvmf0 00:04:59.607 18:57:01 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:59.607 18:57:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:59.607 MallocForNvmf1 
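MallocForNvmf0 and MallocForNvmf1 created above become the namespaces of the NVMe-oF target that the next few calls assemble. Condensed, with the full Jenkins paths shortened to rpc.py for readability, the sequence traced around here is:

rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420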
00:04:59.866 18:57:02 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:59.866 18:57:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:59.866 [2024-07-12 18:57:02.321113] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.866 18:57:02 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:59.866 18:57:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:00.125 18:57:02 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:00.125 18:57:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:00.125 18:57:02 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:00.125 18:57:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:00.385 18:57:02 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:00.385 18:57:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:00.643 [2024-07-12 18:57:02.987178] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:00.643 18:57:02 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:00.643 18:57:03 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.643 18:57:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.643 18:57:03 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:00.643 18:57:03 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.643 18:57:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.643 18:57:03 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:00.643 18:57:03 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:00.643 18:57:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:00.901 MallocBdevForConfigChangeCheck 00:05:00.901 18:57:03 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:00.901 18:57:03 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.901 18:57:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.901 18:57:03 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:00.901 18:57:03 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.160 18:57:03 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:01.160 INFO: shutting down applications... 00:05:01.160 18:57:03 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:01.160 18:57:03 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:01.160 18:57:03 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:01.160 18:57:03 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:03.067 Calling clear_iscsi_subsystem 00:05:03.067 Calling clear_nvmf_subsystem 00:05:03.067 Calling clear_nbd_subsystem 00:05:03.067 Calling clear_ublk_subsystem 00:05:03.067 Calling clear_vhost_blk_subsystem 00:05:03.067 Calling clear_vhost_scsi_subsystem 00:05:03.067 Calling clear_bdev_subsystem 00:05:03.067 18:57:05 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:03.067 18:57:05 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:03.067 18:57:05 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:03.067 18:57:05 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.067 18:57:05 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:03.067 18:57:05 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:03.067 18:57:05 json_config -- json_config/json_config.sh@345 -- # break 00:05:03.067 18:57:05 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:03.067 18:57:05 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:03.067 18:57:05 json_config -- json_config/common.sh@31 -- # local app=target 00:05:03.067 18:57:05 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:03.067 18:57:05 json_config -- json_config/common.sh@35 -- # [[ -n 120845 ]] 00:05:03.067 18:57:05 json_config -- json_config/common.sh@38 -- # kill -SIGINT 120845 00:05:03.067 18:57:05 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:03.067 18:57:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.067 18:57:05 json_config -- json_config/common.sh@41 -- # kill -0 120845 00:05:03.067 18:57:05 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.637 18:57:06 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.637 18:57:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.637 18:57:06 json_config -- json_config/common.sh@41 -- # kill -0 120845 00:05:03.637 18:57:06 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:03.637 18:57:06 json_config -- json_config/common.sh@43 -- # break 00:05:03.637 18:57:06 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:03.637 18:57:06 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:03.637 SPDK target shutdown done 00:05:03.637 18:57:06 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:03.637 INFO: relaunching applications... 00:05:03.637 18:57:06 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.637 18:57:06 json_config -- json_config/common.sh@9 -- # local app=target 00:05:03.637 18:57:06 json_config -- json_config/common.sh@10 -- # shift 00:05:03.637 18:57:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:03.637 18:57:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:03.637 18:57:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:03.637 18:57:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.637 18:57:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.637 18:57:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=122582 00:05:03.637 18:57:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:03.637 Waiting for target to run... 00:05:03.637 18:57:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:03.637 18:57:06 json_config -- json_config/common.sh@25 -- # waitforlisten 122582 /var/tmp/spdk_tgt.sock 00:05:03.637 18:57:06 json_config -- common/autotest_common.sh@829 -- # '[' -z 122582 ']' 00:05:03.637 18:57:06 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:03.637 18:57:06 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.637 18:57:06 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:03.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:03.637 18:57:06 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.637 18:57:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.637 [2024-07-12 18:57:06.089299] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
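The relaunch above closes the save/reload loop: the first target's live configuration was dumped with save_config, the target was shut down, and a new one is brought up from that JSON. The round-trip, condensed (paths shortened; $old_pid is a placeholder for the pid the harness tracks):

rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json   # dump the live config
kill -SIGINT "$old_pid"                                               # stop the first target
spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json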
00:05:03.637 [2024-07-12 18:57:06.089355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122582 ] 00:05:03.637 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.897 [2024-07-12 18:57:06.373586] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.897 [2024-07-12 18:57:06.443068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.187 [2024-07-12 18:57:09.455143] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:07.187 [2024-07-12 18:57:09.487438] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:07.187 18:57:09 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.187 18:57:09 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:07.187 18:57:09 json_config -- json_config/common.sh@26 -- # echo '' 00:05:07.187 00:05:07.187 18:57:09 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:07.187 18:57:09 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:07.187 INFO: Checking if target configuration is the same... 00:05:07.187 18:57:09 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.187 18:57:09 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:07.187 18:57:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.187 + '[' 2 -ne 2 ']' 00:05:07.187 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:07.187 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:07.187 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:07.187 +++ basename /dev/fd/62 00:05:07.187 ++ mktemp /tmp/62.XXX 00:05:07.187 + tmp_file_1=/tmp/62.8Vs 00:05:07.187 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.187 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:07.187 + tmp_file_2=/tmp/spdk_tgt_config.json.xCD 00:05:07.187 + ret=0 00:05:07.187 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:07.445 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:07.445 + diff -u /tmp/62.8Vs /tmp/spdk_tgt_config.json.xCD 00:05:07.445 + echo 'INFO: JSON config files are the same' 00:05:07.446 INFO: JSON config files are the same 00:05:07.446 + rm /tmp/62.8Vs /tmp/spdk_tgt_config.json.xCD 00:05:07.446 + exit 0 00:05:07.446 18:57:09 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:07.446 18:57:09 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:07.446 INFO: changing configuration and checking if this can be detected... 
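The comparison above works by canonicalizing both JSON documents and diffing them: each config is piped through config_filter.py -method sort so key order cannot cause spurious differences, then diff -u decides the result. The shape of the check, assuming the filter reads stdin and writes stdout as the pipeline suggests (reference.json and current.json stand in for the two inputs):

tmp_ref=$(mktemp /tmp/62.XXX)
tmp_cur=$(mktemp /tmp/spdk_tgt_config.json.XXX)
config_filter.py -method sort < reference.json > "$tmp_ref"   # canonical key order
config_filter.py -method sort < current.json   > "$tmp_cur"
if diff -u "$tmp_ref" "$tmp_cur"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm "$tmp_ref" "$tmp_cur"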
00:05:07.446 18:57:09 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:07.446 18:57:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:07.703 18:57:10 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.703 18:57:10 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:07.703 18:57:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.703 + '[' 2 -ne 2 ']' 00:05:07.703 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:07.703 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:07.703 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:07.703 +++ basename /dev/fd/62 00:05:07.703 ++ mktemp /tmp/62.XXX 00:05:07.703 + tmp_file_1=/tmp/62.Yy4 00:05:07.703 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.703 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:07.703 + tmp_file_2=/tmp/spdk_tgt_config.json.rsi 00:05:07.703 + ret=0 00:05:07.703 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:07.962 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:07.962 + diff -u /tmp/62.Yy4 /tmp/spdk_tgt_config.json.rsi 00:05:07.962 + ret=1 00:05:07.962 + echo '=== Start of file: /tmp/62.Yy4 ===' 00:05:07.962 + cat /tmp/62.Yy4 00:05:07.962 + echo '=== End of file: /tmp/62.Yy4 ===' 00:05:07.962 + echo '' 00:05:07.962 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rsi ===' 00:05:07.962 + cat /tmp/spdk_tgt_config.json.rsi 00:05:07.962 + echo '=== End of file: /tmp/spdk_tgt_config.json.rsi ===' 00:05:07.962 + echo '' 00:05:07.962 + rm /tmp/62.Yy4 /tmp/spdk_tgt_config.json.rsi 00:05:07.962 + exit 1 00:05:07.962 18:57:10 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:07.962 INFO: configuration change detected. 
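Teardown below relies on the same graceful-shutdown idiom already traced for pid 120845: send SIGINT, then poll with kill -0 until the process exits or a retry budget runs out. Reduced to its core:

kill -SIGINT "$pid"                        # ask the target to exit cleanly
for ((i = 0; i < 30; i++)); do
    kill -0 "$pid" 2>/dev/null || break    # kill -0 only probes whether $pid is alive
    sleep 0.5
done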
00:05:07.962 18:57:10 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:07.962 18:57:10 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:07.962 18:57:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:07.962 18:57:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.962 18:57:10 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:07.962 18:57:10 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:07.962 18:57:10 json_config -- json_config/json_config.sh@317 -- # [[ -n 122582 ]] 00:05:07.962 18:57:10 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:07.962 18:57:10 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:07.962 18:57:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:07.962 18:57:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.962 18:57:10 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:07.962 18:57:10 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:07.962 18:57:10 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:07.962 18:57:10 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:07.962 18:57:10 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:07.962 18:57:10 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:07.962 18:57:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.962 18:57:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.962 18:57:10 json_config -- json_config/json_config.sh@323 -- # killprocess 122582 00:05:07.962 18:57:10 json_config -- common/autotest_common.sh@948 -- # '[' -z 122582 ']' 00:05:07.962 18:57:10 json_config -- common/autotest_common.sh@952 -- # kill -0 122582 00:05:08.221 18:57:10 json_config -- common/autotest_common.sh@953 -- # uname 00:05:08.221 18:57:10 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.221 18:57:10 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122582 00:05:08.221 18:57:10 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.221 18:57:10 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.221 18:57:10 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122582' 00:05:08.221 killing process with pid 122582 00:05:08.221 18:57:10 json_config -- common/autotest_common.sh@967 -- # kill 122582 00:05:08.221 18:57:10 json_config -- common/autotest_common.sh@972 -- # wait 122582 00:05:09.602 18:57:12 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.602 18:57:12 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:09.602 18:57:12 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.602 18:57:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.602 18:57:12 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:09.602 18:57:12 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:09.602 INFO: Success 00:05:09.602 00:05:09.602 real 0m14.566s 00:05:09.602 user 
0m15.322s 00:05:09.602 sys 0m1.865s 00:05:09.602 18:57:12 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.602 18:57:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.602 ************************************ 00:05:09.602 END TEST json_config 00:05:09.602 ************************************ 00:05:09.602 18:57:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:09.602 18:57:12 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:09.602 18:57:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.602 18:57:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.602 18:57:12 -- common/autotest_common.sh@10 -- # set +x 00:05:09.602 ************************************ 00:05:09.602 START TEST json_config_extra_key 00:05:09.602 ************************************ 00:05:09.602 18:57:12 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:09.862 18:57:12 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:09.862 18:57:12 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.862 18:57:12 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.862 18:57:12 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.862 18:57:12 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.862 18:57:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.862 18:57:12 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.862 18:57:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:09.862 18:57:12 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:09.862 18:57:12 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:09.862 18:57:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:09.862 18:57:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:09.862 18:57:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:09.862 18:57:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:09.862 18:57:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:09.862 18:57:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:09.863 18:57:12 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:09.863 18:57:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:09.863 18:57:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:09.863 18:57:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:09.863 18:57:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:09.863 INFO: launching applications... 00:05:09.863 18:57:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:09.863 18:57:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:09.863 18:57:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:09.863 18:57:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.863 18:57:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.863 18:57:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.863 18:57:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.863 18:57:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.863 18:57:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=124018 00:05:09.863 18:57:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.863 Waiting for target to run... 00:05:09.863 18:57:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 124018 /var/tmp/spdk_tgt.sock 00:05:09.863 18:57:12 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 124018 ']' 00:05:09.863 18:57:12 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:09.863 18:57:12 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.863 18:57:12 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.863 18:57:12 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.863 18:57:12 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.863 18:57:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:09.863 [2024-07-12 18:57:12.294375] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
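Like json_config.sh, this script arms an ERR trap before starting the target, so any failing command reports where it died and cleanup still runs. A stripped-down version of that pattern with a hypothetical handler (the harness's real on_error_exit is not shown in this log):

# Hypothetical handler; only the trap wiring mirrors the trace above.
on_err() {
    echo "test failed in $1 at line $2" >&2
    [ -n "${pid:-}" ] && kill -SIGINT "$pid" 2>/dev/null
    exit 1
}
trap 'on_err "${FUNCNAME[0]:-main}" "$LINENO"' ERR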
00:05:09.863 [2024-07-12 18:57:12.294424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124018 ] 00:05:09.863 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.122 [2024-07-12 18:57:12.574810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.122 [2024-07-12 18:57:12.644265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.691 18:57:13 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.691 18:57:13 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:10.691 18:57:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:10.691 00:05:10.691 18:57:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:10.691 INFO: shutting down applications... 00:05:10.691 18:57:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:10.691 18:57:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:10.691 18:57:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:10.691 18:57:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 124018 ]] 00:05:10.691 18:57:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 124018 00:05:10.691 18:57:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:10.691 18:57:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.691 18:57:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 124018 00:05:10.691 18:57:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.260 18:57:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.260 18:57:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.260 18:57:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 124018 00:05:11.260 18:57:13 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.260 18:57:13 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:11.260 18:57:13 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.260 18:57:13 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.260 SPDK target shutdown done 00:05:11.260 18:57:13 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:11.260 Success 00:05:11.260 00:05:11.260 real 0m1.445s 00:05:11.260 user 0m1.217s 00:05:11.260 sys 0m0.377s 00:05:11.260 18:57:13 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.260 18:57:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:11.260 ************************************ 00:05:11.260 END TEST json_config_extra_key 00:05:11.260 ************************************ 00:05:11.260 18:57:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.260 18:57:13 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.260 18:57:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.260 18:57:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.260 18:57:13 -- 
common/autotest_common.sh@10 -- # set +x 00:05:11.260 ************************************ 00:05:11.260 START TEST alias_rpc 00:05:11.260 ************************************ 00:05:11.260 18:57:13 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.260 * Looking for test storage... 00:05:11.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:11.260 18:57:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:11.260 18:57:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=124295 00:05:11.260 18:57:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 124295 00:05:11.260 18:57:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.260 18:57:13 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 124295 ']' 00:05:11.260 18:57:13 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.260 18:57:13 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.260 18:57:13 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.260 18:57:13 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.260 18:57:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.260 [2024-07-12 18:57:13.803502] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:05:11.260 [2024-07-12 18:57:13.803555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124295 ] 00:05:11.260 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.519 [2024-07-12 18:57:13.870246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.519 [2024-07-12 18:57:13.948302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.088 18:57:14 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.088 18:57:14 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:12.088 18:57:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:12.347 18:57:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 124295 00:05:12.347 18:57:14 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 124295 ']' 00:05:12.347 18:57:14 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 124295 00:05:12.347 18:57:14 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:12.347 18:57:14 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.347 18:57:14 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124295 00:05:12.347 18:57:14 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.347 18:57:14 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.347 18:57:14 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124295' 00:05:12.347 killing process with pid 124295 00:05:12.347 18:57:14 alias_rpc -- common/autotest_common.sh@967 
-- # kill 124295 00:05:12.347 18:57:14 alias_rpc -- common/autotest_common.sh@972 -- # wait 124295 00:05:12.606 00:05:12.606 real 0m1.502s 00:05:12.606 user 0m1.635s 00:05:12.606 sys 0m0.415s 00:05:12.606 18:57:15 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.606 18:57:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.606 ************************************ 00:05:12.606 END TEST alias_rpc 00:05:12.606 ************************************ 00:05:12.866 18:57:15 -- common/autotest_common.sh@1142 -- # return 0 00:05:12.866 18:57:15 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:12.866 18:57:15 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:12.866 18:57:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.866 18:57:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.866 18:57:15 -- common/autotest_common.sh@10 -- # set +x 00:05:12.866 ************************************ 00:05:12.866 START TEST spdkcli_tcp 00:05:12.866 ************************************ 00:05:12.866 18:57:15 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:12.866 * Looking for test storage... 00:05:12.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:12.866 18:57:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:12.866 18:57:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:12.866 18:57:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:12.866 18:57:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:12.866 18:57:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:12.866 18:57:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:12.866 18:57:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:12.866 18:57:15 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.866 18:57:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.866 18:57:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=124600 00:05:12.866 18:57:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 124600 00:05:12.866 18:57:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:12.866 18:57:15 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 124600 ']' 00:05:12.866 18:57:15 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.866 18:57:15 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.866 18:57:15 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.866 18:57:15 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.866 18:57:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.866 [2024-07-12 18:57:15.381114] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
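The alias_rpc pass that just finished is similarly small: it starts a bare spdk_tgt, pushes a config through rpc.py's load_config with the -i flag seen above, then kills and waits on the target. A hedged sketch; the config filename is hypothetical, since the test's actual input is not visible in this excerpt, and -i is copied verbatim from the log (its long form is not shown there):

    ./build/bin/spdk_tgt &
    # feed JSON on stdin through the alias-aware loader; my_alias_config.json is a placeholder
    ./scripts/rpc.py load_config -i < my_alias_config.json
    kill $!    # then reap it, as killprocess does above
    wait $!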
00:05:12.866 [2024-07-12 18:57:15.381164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124600 ] 00:05:12.866 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.866 [2024-07-12 18:57:15.434952] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.126 [2024-07-12 18:57:15.516968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.126 [2024-07-12 18:57:15.516969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.695 18:57:16 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.695 18:57:16 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:13.695 18:57:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=124816 00:05:13.695 18:57:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:13.695 18:57:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:13.955 [ 00:05:13.955 "bdev_malloc_delete", 00:05:13.955 "bdev_malloc_create", 00:05:13.955 "bdev_null_resize", 00:05:13.955 "bdev_null_delete", 00:05:13.955 "bdev_null_create", 00:05:13.955 "bdev_nvme_cuse_unregister", 00:05:13.955 "bdev_nvme_cuse_register", 00:05:13.955 "bdev_opal_new_user", 00:05:13.955 "bdev_opal_set_lock_state", 00:05:13.955 "bdev_opal_delete", 00:05:13.955 "bdev_opal_get_info", 00:05:13.955 "bdev_opal_create", 00:05:13.955 "bdev_nvme_opal_revert", 00:05:13.955 "bdev_nvme_opal_init", 00:05:13.955 "bdev_nvme_send_cmd", 00:05:13.955 "bdev_nvme_get_path_iostat", 00:05:13.955 "bdev_nvme_get_mdns_discovery_info", 00:05:13.955 "bdev_nvme_stop_mdns_discovery", 00:05:13.955 "bdev_nvme_start_mdns_discovery", 00:05:13.955 "bdev_nvme_set_multipath_policy", 00:05:13.955 "bdev_nvme_set_preferred_path", 00:05:13.955 "bdev_nvme_get_io_paths", 00:05:13.955 "bdev_nvme_remove_error_injection", 00:05:13.955 "bdev_nvme_add_error_injection", 00:05:13.955 "bdev_nvme_get_discovery_info", 00:05:13.955 "bdev_nvme_stop_discovery", 00:05:13.955 "bdev_nvme_start_discovery", 00:05:13.955 "bdev_nvme_get_controller_health_info", 00:05:13.955 "bdev_nvme_disable_controller", 00:05:13.955 "bdev_nvme_enable_controller", 00:05:13.955 "bdev_nvme_reset_controller", 00:05:13.955 "bdev_nvme_get_transport_statistics", 00:05:13.955 "bdev_nvme_apply_firmware", 00:05:13.955 "bdev_nvme_detach_controller", 00:05:13.955 "bdev_nvme_get_controllers", 00:05:13.955 "bdev_nvme_attach_controller", 00:05:13.955 "bdev_nvme_set_hotplug", 00:05:13.955 "bdev_nvme_set_options", 00:05:13.955 "bdev_passthru_delete", 00:05:13.955 "bdev_passthru_create", 00:05:13.955 "bdev_lvol_set_parent_bdev", 00:05:13.955 "bdev_lvol_set_parent", 00:05:13.955 "bdev_lvol_check_shallow_copy", 00:05:13.955 "bdev_lvol_start_shallow_copy", 00:05:13.955 "bdev_lvol_grow_lvstore", 00:05:13.955 "bdev_lvol_get_lvols", 00:05:13.955 "bdev_lvol_get_lvstores", 00:05:13.955 "bdev_lvol_delete", 00:05:13.955 "bdev_lvol_set_read_only", 00:05:13.955 "bdev_lvol_resize", 00:05:13.955 "bdev_lvol_decouple_parent", 00:05:13.955 "bdev_lvol_inflate", 00:05:13.955 "bdev_lvol_rename", 00:05:13.955 "bdev_lvol_clone_bdev", 00:05:13.955 "bdev_lvol_clone", 00:05:13.955 "bdev_lvol_snapshot", 00:05:13.955 "bdev_lvol_create", 00:05:13.955 "bdev_lvol_delete_lvstore", 00:05:13.955 
"bdev_lvol_rename_lvstore", 00:05:13.955 "bdev_lvol_create_lvstore", 00:05:13.955 "bdev_raid_set_options", 00:05:13.955 "bdev_raid_remove_base_bdev", 00:05:13.955 "bdev_raid_add_base_bdev", 00:05:13.955 "bdev_raid_delete", 00:05:13.955 "bdev_raid_create", 00:05:13.955 "bdev_raid_get_bdevs", 00:05:13.955 "bdev_error_inject_error", 00:05:13.955 "bdev_error_delete", 00:05:13.955 "bdev_error_create", 00:05:13.955 "bdev_split_delete", 00:05:13.955 "bdev_split_create", 00:05:13.955 "bdev_delay_delete", 00:05:13.955 "bdev_delay_create", 00:05:13.955 "bdev_delay_update_latency", 00:05:13.955 "bdev_zone_block_delete", 00:05:13.955 "bdev_zone_block_create", 00:05:13.955 "blobfs_create", 00:05:13.955 "blobfs_detect", 00:05:13.955 "blobfs_set_cache_size", 00:05:13.955 "bdev_aio_delete", 00:05:13.955 "bdev_aio_rescan", 00:05:13.955 "bdev_aio_create", 00:05:13.955 "bdev_ftl_set_property", 00:05:13.955 "bdev_ftl_get_properties", 00:05:13.955 "bdev_ftl_get_stats", 00:05:13.955 "bdev_ftl_unmap", 00:05:13.955 "bdev_ftl_unload", 00:05:13.955 "bdev_ftl_delete", 00:05:13.955 "bdev_ftl_load", 00:05:13.955 "bdev_ftl_create", 00:05:13.955 "bdev_virtio_attach_controller", 00:05:13.955 "bdev_virtio_scsi_get_devices", 00:05:13.955 "bdev_virtio_detach_controller", 00:05:13.955 "bdev_virtio_blk_set_hotplug", 00:05:13.955 "bdev_iscsi_delete", 00:05:13.955 "bdev_iscsi_create", 00:05:13.955 "bdev_iscsi_set_options", 00:05:13.955 "accel_error_inject_error", 00:05:13.955 "ioat_scan_accel_module", 00:05:13.955 "dsa_scan_accel_module", 00:05:13.955 "iaa_scan_accel_module", 00:05:13.955 "vfu_virtio_create_scsi_endpoint", 00:05:13.955 "vfu_virtio_scsi_remove_target", 00:05:13.955 "vfu_virtio_scsi_add_target", 00:05:13.955 "vfu_virtio_create_blk_endpoint", 00:05:13.955 "vfu_virtio_delete_endpoint", 00:05:13.955 "keyring_file_remove_key", 00:05:13.955 "keyring_file_add_key", 00:05:13.955 "keyring_linux_set_options", 00:05:13.955 "iscsi_get_histogram", 00:05:13.955 "iscsi_enable_histogram", 00:05:13.955 "iscsi_set_options", 00:05:13.955 "iscsi_get_auth_groups", 00:05:13.955 "iscsi_auth_group_remove_secret", 00:05:13.955 "iscsi_auth_group_add_secret", 00:05:13.955 "iscsi_delete_auth_group", 00:05:13.955 "iscsi_create_auth_group", 00:05:13.955 "iscsi_set_discovery_auth", 00:05:13.955 "iscsi_get_options", 00:05:13.955 "iscsi_target_node_request_logout", 00:05:13.955 "iscsi_target_node_set_redirect", 00:05:13.955 "iscsi_target_node_set_auth", 00:05:13.955 "iscsi_target_node_add_lun", 00:05:13.955 "iscsi_get_stats", 00:05:13.955 "iscsi_get_connections", 00:05:13.955 "iscsi_portal_group_set_auth", 00:05:13.955 "iscsi_start_portal_group", 00:05:13.955 "iscsi_delete_portal_group", 00:05:13.955 "iscsi_create_portal_group", 00:05:13.955 "iscsi_get_portal_groups", 00:05:13.955 "iscsi_delete_target_node", 00:05:13.955 "iscsi_target_node_remove_pg_ig_maps", 00:05:13.955 "iscsi_target_node_add_pg_ig_maps", 00:05:13.955 "iscsi_create_target_node", 00:05:13.955 "iscsi_get_target_nodes", 00:05:13.955 "iscsi_delete_initiator_group", 00:05:13.955 "iscsi_initiator_group_remove_initiators", 00:05:13.955 "iscsi_initiator_group_add_initiators", 00:05:13.955 "iscsi_create_initiator_group", 00:05:13.955 "iscsi_get_initiator_groups", 00:05:13.955 "nvmf_set_crdt", 00:05:13.955 "nvmf_set_config", 00:05:13.955 "nvmf_set_max_subsystems", 00:05:13.955 "nvmf_stop_mdns_prr", 00:05:13.955 "nvmf_publish_mdns_prr", 00:05:13.955 "nvmf_subsystem_get_listeners", 00:05:13.955 "nvmf_subsystem_get_qpairs", 00:05:13.955 "nvmf_subsystem_get_controllers", 00:05:13.955 
"nvmf_get_stats", 00:05:13.955 "nvmf_get_transports", 00:05:13.955 "nvmf_create_transport", 00:05:13.955 "nvmf_get_targets", 00:05:13.955 "nvmf_delete_target", 00:05:13.955 "nvmf_create_target", 00:05:13.955 "nvmf_subsystem_allow_any_host", 00:05:13.955 "nvmf_subsystem_remove_host", 00:05:13.955 "nvmf_subsystem_add_host", 00:05:13.955 "nvmf_ns_remove_host", 00:05:13.955 "nvmf_ns_add_host", 00:05:13.955 "nvmf_subsystem_remove_ns", 00:05:13.955 "nvmf_subsystem_add_ns", 00:05:13.955 "nvmf_subsystem_listener_set_ana_state", 00:05:13.955 "nvmf_discovery_get_referrals", 00:05:13.955 "nvmf_discovery_remove_referral", 00:05:13.955 "nvmf_discovery_add_referral", 00:05:13.955 "nvmf_subsystem_remove_listener", 00:05:13.955 "nvmf_subsystem_add_listener", 00:05:13.955 "nvmf_delete_subsystem", 00:05:13.955 "nvmf_create_subsystem", 00:05:13.955 "nvmf_get_subsystems", 00:05:13.955 "env_dpdk_get_mem_stats", 00:05:13.955 "nbd_get_disks", 00:05:13.955 "nbd_stop_disk", 00:05:13.955 "nbd_start_disk", 00:05:13.955 "ublk_recover_disk", 00:05:13.955 "ublk_get_disks", 00:05:13.955 "ublk_stop_disk", 00:05:13.955 "ublk_start_disk", 00:05:13.955 "ublk_destroy_target", 00:05:13.955 "ublk_create_target", 00:05:13.955 "virtio_blk_create_transport", 00:05:13.955 "virtio_blk_get_transports", 00:05:13.955 "vhost_controller_set_coalescing", 00:05:13.955 "vhost_get_controllers", 00:05:13.955 "vhost_delete_controller", 00:05:13.955 "vhost_create_blk_controller", 00:05:13.955 "vhost_scsi_controller_remove_target", 00:05:13.955 "vhost_scsi_controller_add_target", 00:05:13.955 "vhost_start_scsi_controller", 00:05:13.955 "vhost_create_scsi_controller", 00:05:13.955 "thread_set_cpumask", 00:05:13.955 "framework_get_governor", 00:05:13.955 "framework_get_scheduler", 00:05:13.955 "framework_set_scheduler", 00:05:13.955 "framework_get_reactors", 00:05:13.955 "thread_get_io_channels", 00:05:13.955 "thread_get_pollers", 00:05:13.955 "thread_get_stats", 00:05:13.955 "framework_monitor_context_switch", 00:05:13.955 "spdk_kill_instance", 00:05:13.955 "log_enable_timestamps", 00:05:13.955 "log_get_flags", 00:05:13.955 "log_clear_flag", 00:05:13.955 "log_set_flag", 00:05:13.955 "log_get_level", 00:05:13.955 "log_set_level", 00:05:13.955 "log_get_print_level", 00:05:13.955 "log_set_print_level", 00:05:13.955 "framework_enable_cpumask_locks", 00:05:13.955 "framework_disable_cpumask_locks", 00:05:13.955 "framework_wait_init", 00:05:13.955 "framework_start_init", 00:05:13.955 "scsi_get_devices", 00:05:13.955 "bdev_get_histogram", 00:05:13.955 "bdev_enable_histogram", 00:05:13.955 "bdev_set_qos_limit", 00:05:13.955 "bdev_set_qd_sampling_period", 00:05:13.955 "bdev_get_bdevs", 00:05:13.955 "bdev_reset_iostat", 00:05:13.955 "bdev_get_iostat", 00:05:13.955 "bdev_examine", 00:05:13.955 "bdev_wait_for_examine", 00:05:13.955 "bdev_set_options", 00:05:13.955 "notify_get_notifications", 00:05:13.955 "notify_get_types", 00:05:13.955 "accel_get_stats", 00:05:13.955 "accel_set_options", 00:05:13.955 "accel_set_driver", 00:05:13.955 "accel_crypto_key_destroy", 00:05:13.955 "accel_crypto_keys_get", 00:05:13.955 "accel_crypto_key_create", 00:05:13.955 "accel_assign_opc", 00:05:13.955 "accel_get_module_info", 00:05:13.955 "accel_get_opc_assignments", 00:05:13.955 "vmd_rescan", 00:05:13.955 "vmd_remove_device", 00:05:13.955 "vmd_enable", 00:05:13.955 "sock_get_default_impl", 00:05:13.955 "sock_set_default_impl", 00:05:13.955 "sock_impl_set_options", 00:05:13.955 "sock_impl_get_options", 00:05:13.955 "iobuf_get_stats", 00:05:13.955 "iobuf_set_options", 
00:05:13.955 "keyring_get_keys", 00:05:13.955 "framework_get_pci_devices", 00:05:13.955 "framework_get_config", 00:05:13.955 "framework_get_subsystems", 00:05:13.955 "vfu_tgt_set_base_path", 00:05:13.955 "trace_get_info", 00:05:13.955 "trace_get_tpoint_group_mask", 00:05:13.955 "trace_disable_tpoint_group", 00:05:13.955 "trace_enable_tpoint_group", 00:05:13.955 "trace_clear_tpoint_mask", 00:05:13.955 "trace_set_tpoint_mask", 00:05:13.955 "spdk_get_version", 00:05:13.955 "rpc_get_methods" 00:05:13.955 ] 00:05:13.955 18:57:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:13.955 18:57:16 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:13.955 18:57:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.955 18:57:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:13.955 18:57:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 124600 00:05:13.955 18:57:16 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 124600 ']' 00:05:13.955 18:57:16 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 124600 00:05:13.955 18:57:16 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:13.955 18:57:16 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.955 18:57:16 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124600 00:05:13.955 18:57:16 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.955 18:57:16 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.955 18:57:16 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124600' 00:05:13.955 killing process with pid 124600 00:05:13.955 18:57:16 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 124600 00:05:13.955 18:57:16 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 124600 00:05:14.214 00:05:14.214 real 0m1.520s 00:05:14.214 user 0m2.848s 00:05:14.214 sys 0m0.426s 00:05:14.214 18:57:16 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.214 18:57:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.214 ************************************ 00:05:14.214 END TEST spdkcli_tcp 00:05:14.214 ************************************ 00:05:14.496 18:57:16 -- common/autotest_common.sh@1142 -- # return 0 00:05:14.496 18:57:16 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.496 18:57:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.496 18:57:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.496 18:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:14.496 ************************************ 00:05:14.496 START TEST dpdk_mem_utility 00:05:14.496 ************************************ 00:05:14.496 18:57:16 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.496 * Looking for test storage... 
00:05:14.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:14.496 18:57:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:14.496 18:57:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=125024 00:05:14.496 18:57:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 125024 00:05:14.496 18:57:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.496 18:57:16 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 125024 ']' 00:05:14.496 18:57:16 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.496 18:57:16 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.496 18:57:16 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.496 18:57:16 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.496 18:57:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.496 [2024-07-12 18:57:16.959666] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:05:14.496 [2024-07-12 18:57:16.959719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125024 ] 00:05:14.496 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.496 [2024-07-12 18:57:17.028923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.755 [2024-07-12 18:57:17.109598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.323 18:57:17 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.323 18:57:17 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:15.323 18:57:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:15.323 18:57:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:15.323 18:57:17 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.323 18:57:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.323 { 00:05:15.323 "filename": "/tmp/spdk_mem_dump.txt" 00:05:15.323 } 00:05:15.323 18:57:17 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.323 18:57:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:15.323 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:15.323 1 heaps totaling size 814.000000 MiB 00:05:15.323 size: 814.000000 MiB heap id: 0 00:05:15.323 end heaps---------- 00:05:15.323 8 mempools totaling size 598.116089 MiB 00:05:15.323 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:15.323 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:15.323 size: 84.521057 MiB name: bdev_io_125024 00:05:15.323 size: 51.011292 MiB name: evtpool_125024 00:05:15.323 size: 
50.003479 MiB name: msgpool_125024 00:05:15.323 size: 21.763794 MiB name: PDU_Pool 00:05:15.323 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:15.323 size: 0.026123 MiB name: Session_Pool 00:05:15.323 end mempools------- 00:05:15.323 6 memzones totaling size 4.142822 MiB 00:05:15.323 size: 1.000366 MiB name: RG_ring_0_125024 00:05:15.323 size: 1.000366 MiB name: RG_ring_1_125024 00:05:15.323 size: 1.000366 MiB name: RG_ring_4_125024 00:05:15.323 size: 1.000366 MiB name: RG_ring_5_125024 00:05:15.323 size: 0.125366 MiB name: RG_ring_2_125024 00:05:15.323 size: 0.015991 MiB name: RG_ring_3_125024 00:05:15.323 end memzones------- 00:05:15.323 18:57:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:15.323 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:15.323 list of free elements. size: 12.519348 MiB 00:05:15.323 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:15.323 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:15.323 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:15.323 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:15.323 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:15.323 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:15.323 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:15.323 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:15.323 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:15.323 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:15.323 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:15.323 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:15.323 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:15.323 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:15.323 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:15.323 list of standard malloc elements. 
size: 199.218079 MiB 00:05:15.323 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:15.323 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:15.323 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:15.324 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:15.324 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:15.324 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:15.324 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:15.324 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:15.324 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:15.324 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:15.324 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:15.324 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:15.324 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:15.324 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:15.324 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:15.324 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:15.324 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:15.324 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:15.324 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:15.324 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:15.324 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:15.324 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:15.324 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:15.324 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:15.324 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:15.324 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:15.324 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:15.324 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:15.324 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:15.324 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:15.324 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:15.324 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:15.324 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:15.324 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:15.324 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:15.324 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:15.324 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:15.324 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:15.324 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:15.324 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:15.324 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:15.324 list of memzone associated elements. 
size: 602.262573 MiB 00:05:15.324 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:15.324 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:15.324 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:15.324 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:15.324 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:15.324 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_125024_0 00:05:15.324 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:15.324 associated memzone info: size: 48.002930 MiB name: MP_evtpool_125024_0 00:05:15.324 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:15.324 associated memzone info: size: 48.002930 MiB name: MP_msgpool_125024_0 00:05:15.324 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:15.324 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:15.324 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:15.324 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:15.324 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:15.324 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_125024 00:05:15.324 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:15.324 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_125024 00:05:15.324 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:15.324 associated memzone info: size: 1.007996 MiB name: MP_evtpool_125024 00:05:15.324 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:15.324 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:15.324 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:15.324 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:15.324 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:15.324 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:15.324 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:15.324 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:15.324 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:15.324 associated memzone info: size: 1.000366 MiB name: RG_ring_0_125024 00:05:15.324 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:15.324 associated memzone info: size: 1.000366 MiB name: RG_ring_1_125024 00:05:15.324 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:15.324 associated memzone info: size: 1.000366 MiB name: RG_ring_4_125024 00:05:15.324 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:15.324 associated memzone info: size: 1.000366 MiB name: RG_ring_5_125024 00:05:15.324 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:15.324 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_125024 00:05:15.324 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:15.324 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:15.324 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:15.324 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:15.324 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:15.324 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:15.324 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:15.324 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_125024 00:05:15.324 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:15.324 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:15.324 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:15.324 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:15.324 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:15.324 associated memzone info: size: 0.015991 MiB name: RG_ring_3_125024 00:05:15.324 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:15.324 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:15.324 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:15.324 associated memzone info: size: 0.000183 MiB name: MP_msgpool_125024 00:05:15.324 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:15.324 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_125024 00:05:15.324 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:15.324 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:15.324 18:57:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:15.324 18:57:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 125024 00:05:15.324 18:57:17 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 125024 ']' 00:05:15.324 18:57:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 125024 00:05:15.324 18:57:17 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:15.324 18:57:17 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.324 18:57:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125024 00:05:15.583 18:57:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.583 18:57:17 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.583 18:57:17 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125024' 00:05:15.583 killing process with pid 125024 00:05:15.583 18:57:17 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 125024 00:05:15.583 18:57:17 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 125024 00:05:15.842 00:05:15.842 real 0m1.392s 00:05:15.842 user 0m1.467s 00:05:15.842 sys 0m0.394s 00:05:15.842 18:57:18 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.842 18:57:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.842 ************************************ 00:05:15.842 END TEST dpdk_mem_utility 00:05:15.842 ************************************ 00:05:15.842 18:57:18 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.842 18:57:18 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:15.842 18:57:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.842 18:57:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.842 18:57:18 -- common/autotest_common.sh@10 -- # set +x 00:05:15.842 ************************************ 00:05:15.842 START TEST event 00:05:15.842 ************************************ 00:05:15.842 18:57:18 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:15.842 * Looking for test storage... 
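The dpdk_mem_utility pass above pairs one RPC with one helper script: env_dpdk_get_mem_stats makes the target write /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py renders it. As exercised here:

    ./scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                 # heap/mempool/memzone summary
    ./scripts/dpdk_mem_info.py -m 0            # per-element detail, apparently for heap id 0, judging by the output

The summary/detail split is visible above: the first call prints the 814 MiB heap plus the mempool and memzone totals, while the -m 0 call dumps every free, malloc and memzone element of heap 0.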
00:05:15.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:15.842 18:57:18 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:15.842 18:57:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:15.842 18:57:18 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:15.842 18:57:18 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:15.842 18:57:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.842 18:57:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.842 ************************************ 00:05:15.842 START TEST event_perf 00:05:15.842 ************************************ 00:05:15.842 18:57:18 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:16.101 Running I/O for 1 seconds...[2024-07-12 18:57:18.421346] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:05:16.101 [2024-07-12 18:57:18.421410] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125389 ] 00:05:16.101 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.101 [2024-07-12 18:57:18.495818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.101 [2024-07-12 18:57:18.570811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.101 [2024-07-12 18:57:18.570920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.101 [2024-07-12 18:57:18.571025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.101 Running I/O for 1 seconds...[2024-07-12 18:57:18.571025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.477 00:05:17.477 lcore 0: 206424 00:05:17.477 lcore 1: 206422 00:05:17.477 lcore 2: 206422 00:05:17.477 lcore 3: 206423 00:05:17.477 done. 00:05:17.477 00:05:17.477 real 0m1.242s 00:05:17.477 user 0m4.150s 00:05:17.477 sys 0m0.089s 00:05:17.477 18:57:19 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.477 18:57:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.477 ************************************ 00:05:17.477 END TEST event_perf 00:05:17.477 ************************************ 00:05:17.477 18:57:19 event -- common/autotest_common.sh@1142 -- # return 0 00:05:17.477 18:57:19 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:17.477 18:57:19 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:17.477 18:57:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.477 18:57:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.477 ************************************ 00:05:17.477 START TEST event_reactor 00:05:17.477 ************************************ 00:05:17.477 18:57:19 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:17.477 [2024-07-12 18:57:19.726892] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:05:17.477 [2024-07-12 18:57:19.726948] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125599 ] 00:05:17.477 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.477 [2024-07-12 18:57:19.797740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.477 [2024-07-12 18:57:19.870156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.415 test_start 00:05:18.415 oneshot 00:05:18.415 tick 100 00:05:18.415 tick 100 00:05:18.415 tick 250 00:05:18.415 tick 100 00:05:18.415 tick 100 00:05:18.415 tick 100 00:05:18.415 tick 250 00:05:18.415 tick 500 00:05:18.415 tick 100 00:05:18.415 tick 100 00:05:18.415 tick 250 00:05:18.415 tick 100 00:05:18.415 tick 100 00:05:18.415 test_end 00:05:18.415 00:05:18.415 real 0m1.233s 00:05:18.415 user 0m1.147s 00:05:18.415 sys 0m0.082s 00:05:18.415 18:57:20 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.415 18:57:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:18.415 ************************************ 00:05:18.415 END TEST event_reactor 00:05:18.415 ************************************ 00:05:18.415 18:57:20 event -- common/autotest_common.sh@1142 -- # return 0 00:05:18.415 18:57:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.415 18:57:20 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:18.415 18:57:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.415 18:57:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.674 ************************************ 00:05:18.674 START TEST event_reactor_perf 00:05:18.674 ************************************ 00:05:18.674 18:57:21 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.674 [2024-07-12 18:57:21.030937] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:05:18.674 [2024-07-12 18:57:21.031000] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125796 ] 00:05:18.674 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.674 [2024-07-12 18:57:21.103079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.674 [2024-07-12 18:57:21.175964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.054 test_start 00:05:20.054 test_end 00:05:20.054 Performance: 509293 events per second 00:05:20.054 00:05:20.054 real 0m1.233s 00:05:20.054 user 0m1.143s 00:05:20.054 sys 0m0.086s 00:05:20.054 18:57:22 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.054 18:57:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.054 ************************************ 00:05:20.054 END TEST event_reactor_perf 00:05:20.054 ************************************ 00:05:20.054 18:57:22 event -- common/autotest_common.sh@1142 -- # return 0 00:05:20.054 18:57:22 event -- event/event.sh@49 -- # uname -s 00:05:20.054 18:57:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:20.054 18:57:22 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:20.054 18:57:22 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.054 18:57:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.054 18:57:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.054 ************************************ 00:05:20.054 START TEST event_scheduler 00:05:20.054 ************************************ 00:05:20.054 18:57:22 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:20.054 * Looking for test storage... 00:05:20.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:20.054 18:57:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:20.054 18:57:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=126090 00:05:20.054 18:57:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.054 18:57:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:20.054 18:57:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 126090 00:05:20.054 18:57:22 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 126090 ']' 00:05:20.054 18:57:22 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.054 18:57:22 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.054 18:57:22 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
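The three event micro-benchmarks just completed (event_perf, reactor, reactor_perf) share the same tiny CLI, a core mask plus a duration in seconds; the runs captured above were:

    ./test/event/event_perf/event_perf -m 0xF -t 1   # per-lcore event counts on 4 reactors
    ./test/event/reactor/reactor -t 1                # oneshot/tick trace on a single core
    ./test/event/reactor_perf/reactor_perf -t 1      # aggregate rate; 509293 events/s in this run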
00:05:20.054 18:57:22 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.054 18:57:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.054 [2024-07-12 18:57:22.453356] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:05:20.054 [2024-07-12 18:57:22.453412] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126090 ] 00:05:20.054 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.054 [2024-07-12 18:57:22.521409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.054 [2024-07-12 18:57:22.603348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.054 [2024-07-12 18:57:22.603458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.054 [2024-07-12 18:57:22.603562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.054 [2024-07-12 18:57:22.603562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.990 18:57:23 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.990 18:57:23 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:20.990 18:57:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:20.990 18:57:23 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.990 18:57:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.990 [2024-07-12 18:57:23.269945] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:20.990 [2024-07-12 18:57:23.269961] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:20.990 [2024-07-12 18:57:23.269969] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:20.990 [2024-07-12 18:57:23.269975] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:20.990 [2024-07-12 18:57:23.269980] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:20.990 18:57:23 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.990 18:57:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:20.990 18:57:23 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.990 18:57:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.990 [2024-07-12 18:57:23.341893] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:20.990 18:57:23 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.990 18:57:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:20.990 18:57:23 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.990 18:57:23 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.990 18:57:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.990 ************************************ 00:05:20.990 START TEST scheduler_create_thread 00:05:20.990 ************************************ 00:05:20.990 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:20.990 18:57:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:20.990 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.990 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.990 2 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.991 3 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.991 4 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.991 5 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.991 6 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.991 7 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.991 8 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.991 9 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.991 10 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.991 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.558 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.558 18:57:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:21.558 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.558 18:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.939 18:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.939 18:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:22.939 18:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:22.939 18:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.939 18:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.317 18:57:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.317 00:05:24.317 real 0m3.099s 00:05:24.317 user 0m0.023s 00:05:24.317 sys 0m0.005s 00:05:24.317 18:57:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.317 18:57:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.317 ************************************ 00:05:24.317 END TEST scheduler_create_thread 00:05:24.317 ************************************ 00:05:24.317 18:57:26 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:24.317 18:57:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:24.317 18:57:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 126090 00:05:24.317 18:57:26 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 126090 ']' 00:05:24.317 18:57:26 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 126090 00:05:24.317 18:57:26 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:24.317 18:57:26 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.317 18:57:26 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126090 00:05:24.317 18:57:26 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:24.317 18:57:26 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:24.317 18:57:26 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126090' 00:05:24.317 killing process with pid 126090 00:05:24.317 18:57:26 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 126090 00:05:24.317 18:57:26 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 126090 00:05:24.317 [2024-07-12 18:57:26.857060] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
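The scheduler test that just stopped is the densest RPC sequence in this section, so a restated sketch may help; rpc_cmd in the log is autotest's wrapper around rpc.py, and the reading of -a as a busy percentage is inferred from the thread names and values used above:

    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    ./scripts/rpc.py framework_set_scheduler dynamic    # set before init; falls back when the dpdk governor is unavailable
    ./scripts/rpc.py framework_start_init
    # the test plugin creates threads with a pin mask (-m) and a busy percentage (-a)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50    # ids 11/12 were returned by the creates
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12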
00:05:24.576
00:05:24.576 real 0m4.749s
00:05:24.576 user 0m9.223s
00:05:24.576 sys 0m0.376s
00:05:24.576 18:57:27 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:24.576 18:57:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:24.576 ************************************
00:05:24.576 END TEST event_scheduler
00:05:24.576 ************************************
00:05:24.576 18:57:27 event -- common/autotest_common.sh@1142 -- # return 0
00:05:24.576 18:57:27 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:24.576 18:57:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:24.576 18:57:27 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:24.576 18:57:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:24.576 18:57:27 event -- common/autotest_common.sh@10 -- # set +x
00:05:24.576 ************************************
00:05:24.576 START TEST app_repeat
00:05:24.576 ************************************
00:05:24.576 18:57:27 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test
00:05:24.576 18:57:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.576 18:57:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.576 18:57:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:24.576 18:57:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:24.576 18:57:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:24.576 18:57:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:24.576 18:57:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:24.836 18:57:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=126918
00:05:24.836 18:57:27 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:24.836 18:57:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:24.836 18:57:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 126918'
Process app_repeat pid: 126918
00:05:24.836 18:57:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:24.836 18:57:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
00:05:24.836 18:57:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 126918 /var/tmp/spdk-nbd.sock
00:05:24.836 18:57:27 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 126918 ']'
00:05:24.836 18:57:27 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:24.836 18:57:27 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:24.836 18:57:27 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
18:57:27 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:24.836 18:57:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:24.836 [2024-07-12 18:57:27.175511] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
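Note: each round is gated on waitforlisten, which polls until the freshly started app answers on its UNIX-domain RPC socket (max_retries=100 in the trace). A rough sketch of that polling loop under those assumptions; this is not the exact helper from autotest_common.sh, and rpc_get_methods is used here only as one cheap, real SPDK RPC to probe with:

    pid=126918                       # pid printed by the trace; illustrative
    sock=/var/tmp/spdk-nbd.sock
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || exit 1    # process died: stop waiting
        # any successful RPC means the socket is up and the round can proceed
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done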
00:05:24.836 [2024-07-12 18:57:27.175560] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126918 ]
00:05:24.836 EAL: No free 2048 kB hugepages reported on node 1
00:05:24.836 [2024-07-12 18:57:27.241798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:24.836 [2024-07-12 18:57:27.316766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:24.836 [2024-07-12 18:57:27.316767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.775 18:57:27 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:25.775 18:57:27 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:05:25.775 18:57:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:25.775 Malloc0
00:05:26.035 18:57:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:26.035 Malloc1
00:05:26.035 18:57:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:26.035 /dev/nbd0
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:26.035 18:57:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:05:26.035 18:57:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:26.035 18:57:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:26.035 18:57:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:26.035 18:57:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:05:26.035 18:57:28 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:26.035 18:57:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:26.035 18:57:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:26.035 18:57:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:26.035 1+0 records in
00:05:26.035 1+0 records out
00:05:26.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022354 s, 18.3 MB/s
00:05:26.035 18:57:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:26.035 18:57:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:26.035 18:57:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:26.035 18:57:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:26.035 18:57:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:26.035 18:57:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:26.295 /dev/nbd1
00:05:26.295 18:57:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:26.295 18:57:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:26.295 18:57:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:05:26.295 18:57:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:26.295 18:57:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:26.295 18:57:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:26.295 18:57:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:05:26.295 18:57:28 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:26.295 18:57:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:26.295 18:57:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:26.295 18:57:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:26.295 1+0 records in
00:05:26.295 1+0 records out
00:05:26.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186244 s, 22.0 MB/s
00:05:26.295 18:57:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:26.295 18:57:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:26.295 18:57:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:26.295 18:57:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:26.295 18:57:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:26.295 18:57:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:26.295 18:57:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:26.295 18:57:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:26.295 18:57:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:26.295 18:57:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:26.554 18:57:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:26.554 {
00:05:26.554 "nbd_device": "/dev/nbd0",
00:05:26.554 "bdev_name": "Malloc0"
00:05:26.554 },
00:05:26.554 {
00:05:26.554 "nbd_device": "/dev/nbd1",
00:05:26.554 "bdev_name": "Malloc1"
00:05:26.554 }
00:05:26.554 ]'
00:05:26.554 18:57:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:26.554 {
00:05:26.554 "nbd_device": "/dev/nbd0",
00:05:26.554 "bdev_name": "Malloc0"
00:05:26.554 },
00:05:26.554 {
00:05:26.554 "nbd_device": "/dev/nbd1",
00:05:26.554 "bdev_name": "Malloc1"
00:05:26.554 }
00:05:26.554 ]'
00:05:26.554 18:57:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:26.554 /dev/nbd1'
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:26.554 /dev/nbd1'
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:26.554 256+0 records in
00:05:26.554 256+0 records out
00:05:26.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103529 s, 101 MB/s
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:26.554 256+0 records in
00:05:26.554 256+0 records out
00:05:26.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137125 s, 76.5 MB/s
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:26.554 256+0 records in
00:05:26.554 256+0 records out
00:05:26.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145791 s, 71.9 MB/s
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:26.554 18:57:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:26.814 18:57:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:26.814 18:57:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:26.814 18:57:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:26.814 18:57:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:26.814 18:57:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:26.814 18:57:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:26.814 18:57:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:26.814 18:57:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:26.814 18:57:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:26.814 18:57:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:27.074 18:57:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:27.074 18:57:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:27.074 18:57:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:27.074 18:57:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:27.074 18:57:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:27.074 18:57:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:27.074 18:57:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:27.074 18:57:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:27.074 18:57:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:27.074 18:57:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:27.074 18:57:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:27.334 18:57:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:27.334 18:57:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:27.334 18:57:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:27.334 18:57:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:27.334 18:57:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:27.334 18:57:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:27.334 18:57:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:27.334 18:57:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:27.334 18:57:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:27.334 18:57:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:27.334 18:57:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:27.334 18:57:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:27.334 18:57:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:27.593 18:57:29 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:27.594 [2024-07-12 18:57:30.104647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:27.853 [2024-07-12 18:57:30.172742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.853 [2024-07-12 18:57:30.172742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:27.853 [2024-07-12 18:57:30.213571] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:27.853 [2024-07-12 18:57:30.213610] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:30.393 18:57:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:30.393 18:57:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
spdk_app_start Round 1
00:05:30.393 18:57:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 126918 /var/tmp/spdk-nbd.sock
00:05:30.393 18:57:32 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 126918 ']'
00:05:30.393 18:57:32 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:30.393 18:57:32 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:30.393 18:57:32 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
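Note: the data check each round follows the write/verify pattern visible above: export the malloc bdevs as nbd devices, push 1 MiB of random data through the block layer with O_DIRECT, then byte-compare the device against the source file. Condensed to one device (sizes, flags and RPC names as in the trace; paths shortened for readability):

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256            # 1 MiB of random data
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write, bypassing the page cache
    cmp -b -n 1M nbdrandtest /dev/nbd0                             # verify byte-for-byte
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0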
00:05:30.393 18:57:32 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:30.651 18:57:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:30.651 18:57:33 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:30.651 18:57:33 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:05:30.651 18:57:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:30.909 Malloc0
00:05:30.909 18:57:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:31.168 Malloc1
00:05:31.168 18:57:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:31.168 /dev/nbd0
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:31.168 18:57:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:05:31.168 18:57:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:31.168 18:57:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:31.168 18:57:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:31.168 18:57:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:05:31.168 18:57:33 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:31.168 18:57:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:31.168 18:57:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:31.168 18:57:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:31.168 1+0 records in
00:05:31.168 1+0 records out
00:05:31.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184931 s, 22.1 MB/s
00:05:31.168 18:57:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:31.168 18:57:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:31.168 18:57:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:31.168 18:57:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:31.168 18:57:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:31.168 18:57:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:31.427 /dev/nbd1
00:05:31.427 18:57:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:31.427 18:57:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:31.427 18:57:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:05:31.427 18:57:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:31.427 18:57:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:31.427 18:57:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:31.427 18:57:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:05:31.427 18:57:33 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:31.427 18:57:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:31.427 18:57:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:31.427 18:57:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:31.428 1+0 records in
00:05:31.428 1+0 records out
00:05:31.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174843 s, 23.4 MB/s
00:05:31.428 18:57:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:31.428 18:57:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:31.428 18:57:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:31.428 18:57:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:31.428 18:57:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:31.428 18:57:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:31.428 18:57:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:31.428 18:57:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:31.428 18:57:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:31.428 18:57:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:31.686 {
00:05:31.686 "nbd_device": "/dev/nbd0",
00:05:31.686 "bdev_name": "Malloc0"
00:05:31.686 },
00:05:31.686 {
00:05:31.686 "nbd_device": "/dev/nbd1",
00:05:31.686 "bdev_name": "Malloc1"
00:05:31.686 }
00:05:31.686 ]'
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:31.686 {
00:05:31.686 "nbd_device": "/dev/nbd0",
00:05:31.686 "bdev_name": "Malloc0"
00:05:31.686 },
00:05:31.686 {
00:05:31.686 "nbd_device": "/dev/nbd1",
00:05:31.686 "bdev_name": "Malloc1"
00:05:31.686 }
00:05:31.686 ]'
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:31.686 /dev/nbd1'
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:31.686 /dev/nbd1'
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:31.686 256+0 records in
00:05:31.686 256+0 records out
00:05:31.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103549 s, 101 MB/s
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:31.686 256+0 records in
00:05:31.686 256+0 records out
00:05:31.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144961 s, 72.3 MB/s
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:31.686 256+0 records in
00:05:31.686 256+0 records out
00:05:31.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016303 s, 64.3 MB/s
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:31.686 18:57:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:31.945 18:57:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:31.945 18:57:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:31.945 18:57:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:31.945 18:57:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:31.945 18:57:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:31.945 18:57:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:31.945 18:57:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:31.945 18:57:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:31.945 18:57:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:31.945 18:57:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:32.204 18:57:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:32.204 18:57:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:32.204 18:57:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:32.204 18:57:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:32.204 18:57:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:32.204 18:57:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:32.204 18:57:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:32.204 18:57:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:32.204 18:57:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:32.204 18:57:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:32.204 18:57:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:32.464 18:57:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:32.464 18:57:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:32.464 18:57:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:32.464 18:57:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:32.464 18:57:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:32.464 18:57:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:32.464 18:57:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:32.464 18:57:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:32.464 18:57:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:32.464 18:57:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:32.464 18:57:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:32.464 18:57:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:32.464 18:57:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:32.723 18:57:35 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:32.723 [2024-07-12 18:57:35.225070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:32.983 [2024-07-12 18:57:35.292592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:32.983 [2024-07-12 18:57:35.292593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.983 [2024-07-12 18:57:35.334233] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:32.983 [2024-07-12 18:57:35.334272] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:35.519 18:57:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:35.519 18:57:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
spdk_app_start Round 2
00:05:35.519 18:57:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 126918 /var/tmp/spdk-nbd.sock
00:05:35.519 18:57:38 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 126918 ']'
00:05:35.519 18:57:38 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:35.519 18:57:38 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:35.519 18:57:38 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
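Note: the nbd_get_count checks above parse the nbd_get_disks JSON twice — jq extracts the device names, grep -c counts them — and the count is asserted against the expected value (2 while the disks are exported, 0 after nbd_stop_disks). The same check in isolation, assuming the trace's socket path:

    json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    echo "$json" | jq -r '.[] | .nbd_device'         # -> /dev/nbd0, /dev/nbd1
    count=$(echo "$json" | grep -c /dev/nbd || true) # grep -c exits non-zero on zero
                                                     # matches, hence the true fallback
    [ "$count" -eq 2 ] || echo "expected 2 nbd devices, got $count"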
00:05:35.520 18:57:38 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:35.779 18:57:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:35.779 18:57:38 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:35.779 18:57:38 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:05:35.779 18:57:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:36.039 Malloc0
00:05:36.039 18:57:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:36.039 Malloc1
00:05:36.039 18:57:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:36.039 18:57:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:36.039 18:57:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:36.039 18:57:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:36.039 18:57:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:36.039 18:57:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:36.039 18:57:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:36.039 18:57:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:36.039 18:57:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:36.039 18:57:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:36.039 18:57:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:36.039 18:57:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:36.039 18:57:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:36.039 18:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:36.039 18:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:36.039 18:57:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:36.298 /dev/nbd0
00:05:36.298 18:57:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:36.298 18:57:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:36.298 18:57:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:05:36.298 18:57:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:36.298 18:57:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:36.298 18:57:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:36.298 18:57:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:05:36.298 18:57:38 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:36.298 18:57:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:36.298 18:57:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:36.298 18:57:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:36.298 1+0 records in
00:05:36.298 1+0 records out
00:05:36.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235834 s, 17.4 MB/s
00:05:36.298 18:57:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:36.298 18:57:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:36.298 18:57:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:36.298 18:57:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:36.298 18:57:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:36.298 18:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:36.298 18:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:36.298 18:57:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:36.558 /dev/nbd1
00:05:36.558 18:57:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:36.558 18:57:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:36.558 18:57:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:05:36.558 18:57:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:36.558 18:57:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:36.558 18:57:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:36.558 18:57:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:05:36.558 18:57:38 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:36.558 18:57:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:36.558 18:57:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:36.559 18:57:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:36.559 1+0 records in
00:05:36.559 1+0 records out
00:05:36.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224398 s, 18.3 MB/s
00:05:36.559 18:57:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:36.559 18:57:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:36.559 18:57:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:36.559 18:57:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:36.559 18:57:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:36.559 18:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:36.559 18:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:36.559 18:57:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:36.559 18:57:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:36.559 18:57:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:36.819 {
00:05:36.819 "nbd_device": "/dev/nbd0",
00:05:36.819 "bdev_name": "Malloc0"
00:05:36.819 },
00:05:36.819 {
00:05:36.819 "nbd_device": "/dev/nbd1",
00:05:36.819 "bdev_name": "Malloc1"
00:05:36.819 }
00:05:36.819 ]'
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:36.819 {
00:05:36.819 "nbd_device": "/dev/nbd0",
00:05:36.819 "bdev_name": "Malloc0"
00:05:36.819 },
00:05:36.819 {
00:05:36.819 "nbd_device": "/dev/nbd1",
00:05:36.819 "bdev_name": "Malloc1"
00:05:36.819 }
00:05:36.819 ]'
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:36.819 /dev/nbd1'
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:36.819 /dev/nbd1'
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:36.819 256+0 records in
00:05:36.819 256+0 records out
00:05:36.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103558 s, 101 MB/s
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:36.819 256+0 records in
00:05:36.819 256+0 records out
00:05:36.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139285 s, 75.3 MB/s
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:36.819 256+0 records in
00:05:36.819 256+0 records out
00:05:36.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148152 s, 70.8 MB/s
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:36.819 18:57:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:37.078 18:57:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:37.078 18:57:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:37.078 18:57:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:37.078 18:57:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:37.078 18:57:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:37.078 18:57:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:37.078 18:57:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:37.078 18:57:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:37.078 18:57:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:37.078 18:57:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:37.338 18:57:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:37.338 18:57:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:37.338 18:57:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:37.338 18:57:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:37.338 18:57:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:37.338 18:57:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:37.338 18:57:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:37.338 18:57:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:37.338 18:57:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:37.338 18:57:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:37.338 18:57:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:37.338 18:57:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:37.598 18:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:37.598 18:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:37.598 18:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:37.598 18:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:37.598 18:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:37.598 18:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:37.598 18:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:37.598 18:57:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:37.598 18:57:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:37.598 18:57:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:37.598 18:57:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:37.598 18:57:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:37.598 18:57:40 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:37.857 [2024-07-12 18:57:40.303730] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:37.857 [2024-07-12 18:57:40.372677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:37.857 [2024-07-12 18:57:40.372678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.857 [2024-07-12 18:57:40.414126] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:37.857 [2024-07-12 18:57:40.414167] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:41.147 18:57:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 126918 /var/tmp/spdk-nbd.sock
00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 126918 ']'
00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
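Note: teardown is two-staged: spdk_kill_instance SIGTERM asks the app over RPC to shut itself down cleanly, and the killprocess helper that follows is the guarded external kill. A sketch of the guards the trace walks through ('[' -z ... ']', kill -0, ps comm, the sudo check); semantics are assumed to match the real helper in autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                        # no pid given
        kill -0 "$pid" 2>/dev/null || return 0           # already gone
        local name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
        [ "$name" != "sudo" ] || return 1                # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }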
00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:41.147 18:57:43 event.app_repeat -- event/event.sh@39 -- # killprocess 126918 00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 126918 ']' 00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 126918 00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126918 00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126918' 00:05:41.147 killing process with pid 126918 00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@967 -- # kill 126918 00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@972 -- # wait 126918 00:05:41.147 spdk_app_start is called in Round 0. 00:05:41.147 Shutdown signal received, stop current app iteration 00:05:41.147 Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 reinitialization... 00:05:41.147 spdk_app_start is called in Round 1. 00:05:41.147 Shutdown signal received, stop current app iteration 00:05:41.147 Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 reinitialization... 00:05:41.147 spdk_app_start is called in Round 2. 00:05:41.147 Shutdown signal received, stop current app iteration 00:05:41.147 Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 reinitialization... 00:05:41.147 spdk_app_start is called in Round 3. 
00:05:41.147 Shutdown signal received, stop current app iteration 00:05:41.147 18:57:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:41.147 18:57:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:41.147 00:05:41.147 real 0m16.379s 00:05:41.147 user 0m35.574s 00:05:41.147 sys 0m2.355s 00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.147 18:57:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.147 ************************************ 00:05:41.147 END TEST app_repeat 00:05:41.147 ************************************ 00:05:41.147 18:57:43 event -- common/autotest_common.sh@1142 -- # return 0 00:05:41.147 18:57:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:41.147 18:57:43 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:41.147 18:57:43 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.147 18:57:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.147 18:57:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.147 ************************************ 00:05:41.147 START TEST cpu_locks 00:05:41.147 ************************************ 00:05:41.147 18:57:43 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:41.147 * Looking for test storage... 00:05:41.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:41.147 18:57:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:41.147 18:57:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:41.147 18:57:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:41.147 18:57:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:41.147 18:57:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.147 18:57:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.147 18:57:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.147 ************************************ 00:05:41.147 START TEST default_locks 00:05:41.147 ************************************ 00:05:41.417 18:57:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:41.417 18:57:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=129910 00:05:41.417 18:57:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 129910 00:05:41.417 18:57:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.417 18:57:43 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 129910 ']' 00:05:41.417 18:57:43 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.417 18:57:43 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.417 18:57:43 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
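The START TEST / END TEST banners and the real/user/sys timings above come from the harness's run_test wrapper, which frames and times each named test function. A simplified sketch of such a wrapper, assuming only bash built-ins (the in-tree helper in autotest_common.sh additionally manages xtrace state and argument validation):

  # Simplified run_test-style wrapper (sketch; banner width abbreviated).
  run_test() {
      local name=$1
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"          # the bash time keyword prints the real/user/sys lines
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

  run_test default_locks default_locks    # run a test function under the banner and timer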
00:05:41.417 18:57:43 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.417 18:57:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.417 [2024-07-12 18:57:43.767615] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:05:41.417 [2024-07-12 18:57:43.767655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129910 ] 00:05:41.417 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.417 [2024-07-12 18:57:43.831620] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.417 [2024-07-12 18:57:43.904549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.354 18:57:44 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.354 18:57:44 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:42.354 18:57:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 129910 00:05:42.354 18:57:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 129910 00:05:42.354 18:57:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.613 lslocks: write error 00:05:42.613 18:57:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 129910 00:05:42.613 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 129910 ']' 00:05:42.613 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 129910 00:05:42.613 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:42.613 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.613 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 129910 00:05:42.613 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.613 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.613 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 129910' 00:05:42.613 killing process with pid 129910 00:05:42.613 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 129910 00:05:42.613 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 129910 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 129910 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 129910 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 129910 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 129910 ']' 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (129910) - No such process 00:05:42.872 ERROR: process (pid: 129910) is no longer running 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:42.872 00:05:42.872 real 0m1.722s 00:05:42.872 user 0m1.805s 00:05:42.872 sys 0m0.558s 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.872 18:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.872 ************************************ 00:05:42.872 END TEST default_locks 00:05:42.872 ************************************ 00:05:43.131 18:57:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:43.131 18:57:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:43.131 18:57:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.131 18:57:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.131 18:57:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.131 ************************************ 00:05:43.131 START TEST default_locks_via_rpc 00:05:43.131 ************************************ 00:05:43.131 18:57:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:43.131 18:57:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=130185 00:05:43.131 18:57:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 130185 00:05:43.131 18:57:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.131 18:57:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 130185 ']' 00:05:43.131 18:57:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.131 18:57:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.131 18:57:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.131 18:57:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.131 18:57:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.131 [2024-07-12 18:57:45.548489] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:05:43.131 [2024-07-12 18:57:45.548528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130185 ] 00:05:43.131 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.131 [2024-07-12 18:57:45.616651] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.131 [2024-07-12 18:57:45.696353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 130185 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 130185 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.091 18:57:46 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 130185 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 130185 ']' 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 130185 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130185 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130185' 00:05:44.091 killing process with pid 130185 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 130185 00:05:44.091 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 130185 00:05:44.350 00:05:44.350 real 0m1.401s 00:05:44.350 user 0m1.473s 00:05:44.350 sys 0m0.441s 00:05:44.350 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.350 18:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.350 ************************************ 00:05:44.350 END TEST default_locks_via_rpc 00:05:44.350 ************************************ 00:05:44.609 18:57:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:44.609 18:57:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:44.609 18:57:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.609 18:57:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.609 18:57:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.609 ************************************ 00:05:44.609 START TEST non_locking_app_on_locked_coremask 00:05:44.609 ************************************ 00:05:44.609 18:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:44.609 18:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=130497 00:05:44.609 18:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 130497 /var/tmp/spdk.sock 00:05:44.609 18:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.609 18:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 130497 ']' 00:05:44.609 18:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.609 18:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.609 18:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.609 18:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.609 18:57:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.609 [2024-07-12 18:57:47.019609] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:05:44.609 [2024-07-12 18:57:47.019651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130497 ] 00:05:44.609 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.609 [2024-07-12 18:57:47.085596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.609 [2024-07-12 18:57:47.164576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.564 18:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.564 18:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:45.564 18:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:45.564 18:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=130665 00:05:45.564 18:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 130665 /var/tmp/spdk2.sock 00:05:45.564 18:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 130665 ']' 00:05:45.564 18:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.564 18:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.564 18:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.564 18:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.564 18:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.564 [2024-07-12 18:57:47.853575] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:05:45.564 [2024-07-12 18:57:47.853618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130665 ] 00:05:45.564 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.564 [2024-07-12 18:57:47.923013] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
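Two helpers carry the default_locks logic traced above: locks_exist asks lslocks whether the target pid still holds a spdk_cpu_lock file (the stray 'lslocks: write error' is lslocks reporting the broken pipe once grep -q exits early), and NOT inverts a command's exit status so that an expected failure, such as waitforlisten on the freshly killed pid, counts as a pass. Minimal sketches of both, simplified from what the trace shows:

  # Does pid $1 still hold a per-core SPDK lock file? (sketch)
  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  # Succeed only when the wrapped command fails (sketch; the in-tree NOT also
  # validates its argument before running it).
  NOT() {
      if "$@"; then
          return 1
      fi
      return 0
  }

  # Mirroring the trace: after killprocess, the pid must be gone.
  # NOT waitforlisten 129910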
00:05:45.564 [2024-07-12 18:57:47.923033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.564 [2024-07-12 18:57:48.072862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.132 18:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.132 18:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:46.132 18:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 130497 00:05:46.132 18:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.132 18:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 130497 00:05:47.069 lslocks: write error 00:05:47.069 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 130497 00:05:47.069 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 130497 ']' 00:05:47.069 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 130497 00:05:47.069 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:47.069 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.069 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130497 00:05:47.069 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.069 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.069 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130497' 00:05:47.069 killing process with pid 130497 00:05:47.069 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 130497 00:05:47.069 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 130497 00:05:47.639 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 130665 00:05:47.639 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 130665 ']' 00:05:47.639 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 130665 00:05:47.639 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:47.639 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.639 18:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130665 00:05:47.639 18:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.639 18:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.639 18:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130665' 00:05:47.639 killing 
process with pid 130665 00:05:47.639 18:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 130665 00:05:47.639 18:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 130665 00:05:47.899 00:05:47.899 real 0m3.362s 00:05:47.899 user 0m3.600s 00:05:47.899 sys 0m0.938s 00:05:47.899 18:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.899 18:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.899 ************************************ 00:05:47.899 END TEST non_locking_app_on_locked_coremask 00:05:47.899 ************************************ 00:05:47.899 18:57:50 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:47.899 18:57:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:47.899 18:57:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.899 18:57:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.900 18:57:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.900 ************************************ 00:05:47.900 START TEST locking_app_on_unlocked_coremask 00:05:47.900 ************************************ 00:05:47.900 18:57:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:47.900 18:57:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=131153 00:05:47.900 18:57:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 131153 /var/tmp/spdk.sock 00:05:47.900 18:57:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:47.900 18:57:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 131153 ']' 00:05:47.900 18:57:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.900 18:57:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.900 18:57:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.900 18:57:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.900 18:57:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.900 [2024-07-12 18:57:50.451344] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:05:47.900 [2024-07-12 18:57:50.451388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131153 ] 00:05:48.159 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.159 [2024-07-12 18:57:50.519079] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
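default_locks_via_rpc, which completes above, flips the same locks at runtime instead of at startup: framework_disable_cpumask_locks releases the per-core lock files on a live target and framework_enable_cpumask_locks reclaims them, with lslocks checked after each step. A sketch of that round trip against the default RPC socket from this run:

  # Toggle per-core lock files on a live spdk_tgt over RPC (sketch).
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  pid=130185    # pid of the running target in the trace

  "$rpc_py" -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: locks still held" >&2

  "$rpc_py" -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "unexpected: no locks retaken" >&2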
00:05:48.159 [2024-07-12 18:57:50.519105] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.159 [2024-07-12 18:57:50.588823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.728 18:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.728 18:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:48.728 18:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=131339 00:05:48.728 18:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 131339 /var/tmp/spdk2.sock 00:05:48.728 18:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:48.728 18:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 131339 ']' 00:05:48.728 18:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.728 18:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.728 18:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.729 18:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.729 18:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.988 [2024-07-12 18:57:51.308864] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:05:48.988 [2024-07-12 18:57:51.308913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131339 ] 00:05:48.988 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.988 [2024-07-12 18:57:51.385177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.988 [2024-07-12 18:57:51.535309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.558 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.558 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:49.558 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 131339 00:05:49.558 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 131339 00:05:49.558 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.127 lslocks: write error 00:05:50.127 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 131153 00:05:50.127 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 131153 ']' 00:05:50.127 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 131153 00:05:50.127 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:50.127 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.127 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131153 00:05:50.127 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.127 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.127 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131153' 00:05:50.127 killing process with pid 131153 00:05:50.127 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 131153 00:05:50.127 18:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 131153 00:05:50.696 18:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 131339 00:05:50.696 18:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 131339 ']' 00:05:50.696 18:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 131339 00:05:50.696 18:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:50.696 18:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.696 18:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131339 00:05:50.954 18:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:50.954 18:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.954 18:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131339' 00:05:50.954 killing process with pid 131339 00:05:50.954 18:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 131339 00:05:50.954 18:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 131339 00:05:51.214 00:05:51.214 real 0m3.196s 00:05:51.214 user 0m3.422s 00:05:51.214 sys 0m0.924s 00:05:51.214 18:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.214 18:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.214 ************************************ 00:05:51.214 END TEST locking_app_on_unlocked_coremask 00:05:51.214 ************************************ 00:05:51.214 18:57:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:51.214 18:57:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:51.214 18:57:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.214 18:57:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.214 18:57:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.214 ************************************ 00:05:51.214 START TEST locking_app_on_locked_coremask 00:05:51.214 ************************************ 00:05:51.214 18:57:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:51.214 18:57:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=131660 00:05:51.214 18:57:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 131660 /var/tmp/spdk.sock 00:05:51.214 18:57:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.214 18:57:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 131660 ']' 00:05:51.214 18:57:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.215 18:57:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.215 18:57:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.215 18:57:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.215 18:57:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.215 [2024-07-12 18:57:53.718006] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
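The two runs ending above probe both directions of coexistence on a shared core: non_locking_app_on_locked_coremask starts a locked target and then a second one with --disable-cpumask-locks, while locking_app_on_unlocked_coremask starts the unlocked instance first and lets the second take the lock. The essentials are one RPC socket per instance (-r) and the flag on whichever side must not claim the core. A compressed sketch with the paths from this run (the real tests gate each launch with waitforlisten):

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

  # First instance claims the core-0 lock on the default socket /var/tmp/spdk.sock.
  "$spdk_tgt" -m 0x1 &

  # Second instance shares core 0 without locking; it needs its own RPC socket.
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

  wait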
00:05:51.215 [2024-07-12 18:57:53.718053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131660 ] 00:05:51.215 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.474 [2024-07-12 18:57:53.787510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.474 [2024-07-12 18:57:53.865779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=131891 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 131891 /var/tmp/spdk2.sock 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 131891 /var/tmp/spdk2.sock 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 131891 /var/tmp/spdk2.sock 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 131891 ']' 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.045 18:57:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.045 [2024-07-12 18:57:54.575723] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:05:52.045 [2024-07-12 18:57:54.575768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131891 ] 00:05:52.045 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.304 [2024-07-12 18:57:54.651970] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 131660 has claimed it. 00:05:52.304 [2024-07-12 18:57:54.652009] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:52.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (131891) - No such process 00:05:52.872 ERROR: process (pid: 131891) is no longer running 00:05:52.872 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.872 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:52.872 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:52.872 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.872 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:52.872 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.872 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 131660 00:05:52.872 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 131660 00:05:52.872 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.130 lslocks: write error 00:05:53.130 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 131660 00:05:53.130 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 131660 ']' 00:05:53.131 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 131660 00:05:53.131 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:53.131 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.131 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131660 00:05:53.131 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.131 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.131 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131660' 00:05:53.131 killing process with pid 131660 00:05:53.131 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 131660 00:05:53.131 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 131660 00:05:53.390 00:05:53.390 real 0m2.151s 00:05:53.390 user 0m2.361s 00:05:53.390 sys 0m0.576s 00:05:53.390 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.390 18:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.390 ************************************ 00:05:53.390 END TEST locking_app_on_locked_coremask 00:05:53.390 ************************************ 00:05:53.390 18:57:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:53.390 18:57:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:53.390 18:57:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.390 18:57:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.390 18:57:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.390 ************************************ 00:05:53.390 START TEST locking_overlapped_coremask 00:05:53.390 ************************************ 00:05:53.390 18:57:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:53.390 18:57:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=132153 00:05:53.390 18:57:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 132153 /var/tmp/spdk.sock 00:05:53.390 18:57:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:53.390 18:57:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 132153 ']' 00:05:53.390 18:57:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.390 18:57:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.390 18:57:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.390 18:57:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.390 18:57:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.390 [2024-07-12 18:57:55.932342] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
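locking_app_on_locked_coremask, finished above, is the pure negative case: with core 0 already locked by pid 131660, the second target on the same mask must die in claim_cpu_cores before its RPC socket ever appears, and the harness asserts exactly that by wrapping waitforlisten in NOT. A sketch of the assertion (waitforlisten is the harness helper that polls for the socket until the pid exits):

  # Expect the second instance to abort before it listens (sketch).
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &
  pid2=$!

  if waitforlisten "$pid2" /var/tmp/spdk2.sock; then
      echo "unexpected: second instance acquired the locked core" >&2
      exit 1
  fi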
00:05:53.390 [2024-07-12 18:57:55.932385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132153 ] 00:05:53.390 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.650 [2024-07-12 18:57:55.999785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.650 [2024-07-12 18:57:56.075569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.650 [2024-07-12 18:57:56.075661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.650 [2024-07-12 18:57:56.075662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=132312 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 132312 /var/tmp/spdk2.sock 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 132312 /var/tmp/spdk2.sock 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 132312 /var/tmp/spdk2.sock 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 132312 ']' 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.219 18:57:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.478 [2024-07-12 18:57:56.791814] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:05:54.478 [2024-07-12 18:57:56.791863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132312 ] 00:05:54.478 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.478 [2024-07-12 18:57:56.873295] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 132153 has claimed it. 00:05:54.478 [2024-07-12 18:57:56.873333] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (132312) - No such process 00:05:55.047 ERROR: process (pid: 132312) is no longer running 00:05:55.047 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.047 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:55.047 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:55.047 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.047 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.047 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.047 18:57:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:55.047 18:57:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:55.047 18:57:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:55.047 18:57:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:55.047 18:57:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 132153 00:05:55.047 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 132153 ']' 00:05:55.047 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 132153 00:05:55.047 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:55.047 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.048 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132153 00:05:55.048 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.048 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.048 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132153' 00:05:55.048 killing process with pid 132153 00:05:55.048 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 132153 00:05:55.048 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 132153 00:05:55.308 00:05:55.308 real 0m1.900s 00:05:55.308 user 0m5.343s 00:05:55.308 sys 0m0.421s 00:05:55.308 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.308 18:57:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.308 ************************************ 00:05:55.308 END TEST locking_overlapped_coremask 00:05:55.308 ************************************ 00:05:55.308 18:57:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:55.308 18:57:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:55.308 18:57:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.308 18:57:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.308 18:57:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.308 ************************************ 00:05:55.308 START TEST locking_overlapped_coremask_via_rpc 00:05:55.308 ************************************ 00:05:55.308 18:57:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:55.308 18:57:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=132431 00:05:55.308 18:57:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 132431 /var/tmp/spdk.sock 00:05:55.308 18:57:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:55.308 18:57:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 132431 ']' 00:05:55.308 18:57:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.308 18:57:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.308 18:57:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.308 18:57:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.308 18:57:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.568 [2024-07-12 18:57:57.903603] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:05:55.568 [2024-07-12 18:57:57.903648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132431 ] 00:05:55.568 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.568 [2024-07-12 18:57:57.970383] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
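check_remaining_locks, traced just above, leans on the lock files' naming convention: claim_cpu_cores creates one /var/tmp/spdk_cpu_lock_NNN file per claimed core, so a target running with -m 0x7 must leave exactly spdk_cpu_lock_000 through spdk_cpu_lock_002 behind. A sketch of that comparison as the trace performs it:

  # Verify exactly cores 0-2 hold lock files (sketch of check_remaining_locks).
  locks=(/var/tmp/spdk_cpu_lock_*)
  expected=(/var/tmp/spdk_cpu_lock_{000..002})

  if [[ ${locks[*]} != "${expected[*]}" ]]; then
      echo "unexpected lock files: ${locks[*]}" >&2
      exit 1
  fi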
00:05:55.568 [2024-07-12 18:57:57.970408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.568 [2024-07-12 18:57:58.043606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.568 [2024-07-12 18:57:58.043711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.568 [2024-07-12 18:57:58.043712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.138 18:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.138 18:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:56.138 18:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:56.138 18:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=132654 00:05:56.398 18:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 132654 /var/tmp/spdk2.sock 00:05:56.398 18:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 132654 ']' 00:05:56.398 18:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.398 18:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.398 18:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.398 18:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.398 18:57:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.398 [2024-07-12 18:57:58.752989] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:05:56.398 [2024-07-12 18:57:58.753040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132654 ] 00:05:56.398 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.398 [2024-07-12 18:57:58.828357] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:56.398 [2024-07-12 18:57:58.828386] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.657 [2024-07-12 18:57:58.978509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.657 [2024-07-12 18:57:58.978624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.657 [2024-07-12 18:57:58.978625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.227 [2024-07-12 18:57:59.570304] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 132431 has claimed it. 
00:05:57.227 request: 00:05:57.227 { 00:05:57.227 "method": "framework_enable_cpumask_locks", 00:05:57.227 "req_id": 1 00:05:57.227 } 00:05:57.227 Got JSON-RPC error response 00:05:57.227 response: 00:05:57.227 { 00:05:57.227 "code": -32603, 00:05:57.227 "message": "Failed to claim CPU core: 2" 00:05:57.227 } 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 132431 /var/tmp/spdk.sock 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 132431 ']' 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 132654 /var/tmp/spdk2.sock 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 132654 ']' 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
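The request/response pair above is the heart of the via_rpc variant: both targets start with --disable-cpumask-locks and are then asked to claim their masks at runtime over JSON-RPC. rpc_cmd is a wrapper around SPDK's scripts/rpc.py, so the failing call corresponds roughly to the following (socket path from the trace; a sketch, not the harness's exact command line):

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # pid 132431 (mask 0x7) already holds core 2, so this returns
    # JSON-RPC error -32603: "Failed to claim CPU core: 2"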
00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.227 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.487 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.487 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:57.487 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:57.487 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:57.487 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:57.487 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:57.487 00:05:57.487 real 0m2.102s 00:05:57.487 user 0m0.875s 00:05:57.487 sys 0m0.160s 00:05:57.487 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.487 18:57:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.487 ************************************ 00:05:57.487 END TEST locking_overlapped_coremask_via_rpc 00:05:57.487 ************************************ 00:05:57.487 18:57:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:57.487 18:57:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:57.487 18:57:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 132431 ]] 00:05:57.487 18:57:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 132431 00:05:57.487 18:57:59 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 132431 ']' 00:05:57.487 18:57:59 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 132431 00:05:57.487 18:57:59 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:57.487 18:57:59 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.487 18:57:59 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132431 00:05:57.487 18:58:00 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.487 18:58:00 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.487 18:58:00 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132431' 00:05:57.487 killing process with pid 132431 00:05:57.487 18:58:00 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 132431 00:05:57.487 18:58:00 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 132431 00:05:58.056 18:58:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 132654 ]] 00:05:58.056 18:58:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 132654 00:05:58.056 18:58:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 132654 ']' 00:05:58.056 18:58:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 132654 00:05:58.056 18:58:00 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
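The run of backslashes in the check_remaining_locks trace above is just bash xtrace quoting the literal right-hand side of a [[ == ]] comparison. De-escaped, the check asserts that exactly the lock files for cores 0-2 remain:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]   # only cores 0, 1 and 2 are still locked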
00:05:58.056 18:58:00 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.056 18:58:00 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132654 00:05:58.056 18:58:00 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:58.056 18:58:00 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:58.056 18:58:00 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132654' 00:05:58.056 killing process with pid 132654 00:05:58.056 18:58:00 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 132654 00:05:58.056 18:58:00 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 132654 00:05:58.315 18:58:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.315 18:58:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:58.315 18:58:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 132431 ]] 00:05:58.315 18:58:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 132431 00:05:58.315 18:58:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 132431 ']' 00:05:58.315 18:58:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 132431 00:05:58.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (132431) - No such process 00:05:58.315 18:58:00 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 132431 is not found' 00:05:58.315 Process with pid 132431 is not found 00:05:58.315 18:58:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 132654 ]] 00:05:58.315 18:58:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 132654 00:05:58.315 18:58:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 132654 ']' 00:05:58.315 18:58:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 132654 00:05:58.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (132654) - No such process 00:05:58.315 18:58:00 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 132654 is not found' 00:05:58.315 Process with pid 132654 is not found 00:05:58.315 18:58:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.315 00:05:58.315 real 0m17.141s 00:05:58.315 user 0m29.403s 00:05:58.315 sys 0m4.932s 00:05:58.315 18:58:00 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.315 18:58:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.315 ************************************ 00:05:58.315 END TEST cpu_locks 00:05:58.315 ************************************ 00:05:58.315 18:58:00 event -- common/autotest_common.sh@1142 -- # return 0 00:05:58.315 00:05:58.315 real 0m42.484s 00:05:58.315 user 1m20.847s 00:05:58.315 sys 0m8.257s 00:05:58.315 18:58:00 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.315 18:58:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.315 ************************************ 00:05:58.315 END TEST event 00:05:58.315 ************************************ 00:05:58.315 18:58:00 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.315 18:58:00 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:58.315 18:58:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.315 18:58:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.315 18:58:00 -- 
common/autotest_common.sh@10 -- # set +x 00:05:58.315 ************************************ 00:05:58.316 START TEST thread 00:05:58.316 ************************************ 00:05:58.316 18:58:00 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:58.576 * Looking for test storage... 00:05:58.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:58.576 18:58:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.576 18:58:00 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:58.576 18:58:00 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.576 18:58:00 thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.576 ************************************ 00:05:58.576 START TEST thread_poller_perf 00:05:58.576 ************************************ 00:05:58.576 18:58:00 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.576 [2024-07-12 18:58:00.968618] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:05:58.576 [2024-07-12 18:58:00.968676] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133199 ] 00:05:58.576 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.576 [2024-07-12 18:58:01.039471] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.576 [2024-07-12 18:58:01.112435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.576 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:59.957 ====================================== 00:05:59.957 busy:2307867710 (cyc) 00:05:59.957 total_run_count: 409000 00:05:59.957 tsc_hz: 2300000000 (cyc) 00:05:59.957 ====================================== 00:05:59.957 poller_cost: 5642 (cyc), 2453 (nsec) 00:05:59.957 00:05:59.957 real 0m1.241s 00:05:59.957 user 0m1.155s 00:05:59.957 sys 0m0.083s 00:05:59.957 18:58:02 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.957 18:58:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.957 ************************************ 00:05:59.957 END TEST thread_poller_perf 00:05:59.957 ************************************ 00:05:59.957 18:58:02 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:59.957 18:58:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:59.957 18:58:02 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:59.957 18:58:02 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.957 18:58:02 thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.957 ************************************ 00:05:59.957 START TEST thread_poller_perf 00:05:59.957 ************************************ 00:05:59.957 18:58:02 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:59.957 [2024-07-12 18:58:02.278741] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:05:59.957 [2024-07-12 18:58:02.278808] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133400 ] 00:05:59.957 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.957 [2024-07-12 18:58:02.350025] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.957 [2024-07-12 18:58:02.423555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.957 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:01.344 ====================================== 00:06:01.344 busy:2301564562 (cyc) 00:06:01.344 total_run_count: 5380000 00:06:01.344 tsc_hz: 2300000000 (cyc) 00:06:01.344 ====================================== 00:06:01.344 poller_cost: 427 (cyc), 185 (nsec) 00:06:01.344 00:06:01.344 real 0m1.238s 00:06:01.344 user 0m1.152s 00:06:01.344 sys 0m0.082s 00:06:01.344 18:58:03 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.344 18:58:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.344 ************************************ 00:06:01.344 END TEST thread_poller_perf 00:06:01.344 ************************************ 00:06:01.344 18:58:03 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:01.344 18:58:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:01.344 00:06:01.344 real 0m2.702s 00:06:01.344 user 0m2.397s 00:06:01.344 sys 0m0.315s 00:06:01.344 18:58:03 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.344 18:58:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.344 ************************************ 00:06:01.344 END TEST thread 00:06:01.344 ************************************ 00:06:01.344 18:58:03 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.344 18:58:03 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:01.344 18:58:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.344 18:58:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.344 18:58:03 -- common/autotest_common.sh@10 -- # set +x 00:06:01.344 ************************************ 00:06:01.344 START TEST accel 00:06:01.344 ************************************ 00:06:01.344 18:58:03 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:01.344 * Looking for test storage... 00:06:01.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:01.344 18:58:03 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:01.344 18:58:03 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:01.344 18:58:03 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:01.344 18:58:03 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=133721 00:06:01.344 18:58:03 accel -- accel/accel.sh@63 -- # waitforlisten 133721 00:06:01.344 18:58:03 accel -- common/autotest_common.sh@829 -- # '[' -z 133721 ']' 00:06:01.344 18:58:03 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.344 18:58:03 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:01.344 18:58:03 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.344 18:58:03 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:01.344 18:58:03 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
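The two poller_perf result banners above reduce to one division each: poller_cost is busy TSC cycles over total_run_count, converted to nanoseconds via the reported tsc_hz of 2300000000 (2.3 GHz). The 1-microsecond-period run pays timer-management overhead (5642 cycles per poller call); the 0-microsecond busy-poll run costs only 427:

    echo $(( 2307867710 / 409000 ))    # 5642 cyc/call -> 5642 / 2.3 ~ 2453 nsec
    echo $(( 2301564562 / 5380000 ))   # 427 cyc/call  -> 427 / 2.3  ~ 185 nsec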
00:06:01.344 18:58:03 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.344 18:58:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.344 18:58:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.344 18:58:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.344 18:58:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.344 18:58:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.344 18:58:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.344 18:58:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:01.344 18:58:03 accel -- accel/accel.sh@41 -- # jq -r . 00:06:01.344 [2024-07-12 18:58:03.744810] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:06:01.344 [2024-07-12 18:58:03.744862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133721 ] 00:06:01.344 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.344 [2024-07-12 18:58:03.811349] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.344 [2024-07-12 18:58:03.885525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.284 18:58:04 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.284 18:58:04 accel -- common/autotest_common.sh@862 -- # return 0 00:06:02.284 18:58:04 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:02.284 18:58:04 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:02.284 18:58:04 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:02.284 18:58:04 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:02.284 18:58:04 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:02.284 18:58:04 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:02.284 18:58:04 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.284 18:58:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.284 18:58:04 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:02.284 18:58:04 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.284 18:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.284 18:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.284 18:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.284 18:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.284 18:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.284 18:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.284 18:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.284 18:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.284 18:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.284 18:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.284 18:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.284 18:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.284 18:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.284 18:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.284 18:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.284 18:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.284 18:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.284 18:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.284 18:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.284 18:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.284 18:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.284 18:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.284 18:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.284 18:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.284 18:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.284 18:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.284 18:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.284 18:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.284 18:58:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.284 18:58:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.284 18:58:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.284 18:58:04 accel -- accel/accel.sh@75 -- # killprocess 133721 00:06:02.284 18:58:04 accel -- common/autotest_common.sh@948 -- # '[' -z 133721 ']' 00:06:02.284 18:58:04 accel -- common/autotest_common.sh@952 -- # kill -0 133721 00:06:02.284 18:58:04 accel -- common/autotest_common.sh@953 -- # uname 00:06:02.284 18:58:04 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.284 18:58:04 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 133721 00:06:02.284 18:58:04 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.284 18:58:04 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.284 18:58:04 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 133721' 00:06:02.284 killing process with pid 133721 00:06:02.284 18:58:04 accel -- common/autotest_common.sh@967 -- # kill 133721 00:06:02.284 18:58:04 accel -- common/autotest_common.sh@972 -- # wait 133721 00:06:02.544 18:58:04 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:02.544 18:58:04 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:02.544 18:58:04 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:02.544 18:58:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.544 18:58:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.544 18:58:04 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:02.544 18:58:04 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:02.544 18:58:04 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:02.544 18:58:04 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.544 18:58:04 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.544 18:58:04 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.544 18:58:04 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.544 18:58:04 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.544 18:58:04 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:02.544 18:58:04 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:02.544 18:58:05 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.544 18:58:05 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:02.544 18:58:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.544 18:58:05 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:02.544 18:58:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:02.544 18:58:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.544 18:58:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.544 ************************************ 00:06:02.544 START TEST accel_missing_filename 00:06:02.544 ************************************ 00:06:02.544 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:02.544 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:02.544 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:02.544 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:02.544 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.544 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:02.544 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.544 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:02.544 18:58:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:02.544 18:58:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:02.544 18:58:05 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.544 18:58:05 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.544 18:58:05 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.544 18:58:05 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.544 18:58:05 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.544 18:58:05 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:02.544 18:58:05 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:02.544 [2024-07-12 18:58:05.094783] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:06:02.544 [2024-07-12 18:58:05.094832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133990 ] 00:06:02.804 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.804 [2024-07-12 18:58:05.161912] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.804 [2024-07-12 18:58:05.233984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.804 [2024-07-12 18:58:05.275086] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.804 [2024-07-12 18:58:05.334976] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:03.064 A filename is required. 
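accel_missing_filename is an expected-failure case: a compress workload with no -l input file must abort with "A filename is required." as above. The NOT helper from autotest_common.sh inverts the wrapped command's exit status; the es=234 -> 106 -> 1 sequence just below is it folding a signal-style code (>128) down before the comparison. A simplified sketch of the pattern, not the exact helper:

    NOT() {                                    # succeeds only when "$@" fails
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es - 128 ))   # normalize death-by-signal exit codes
        (( es != 0 ))                          # nonzero exit from "$@" means NOT passes
    }
    NOT accel_perf -t 1 -w compress            # must fail: compress needs -l <input>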
00:06:03.064 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:03.064 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.064 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:03.064 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:03.064 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:03.064 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.064 00:06:03.064 real 0m0.341s 00:06:03.064 user 0m0.249s 00:06:03.064 sys 0m0.129s 00:06:03.064 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.064 18:58:05 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:03.064 ************************************ 00:06:03.064 END TEST accel_missing_filename 00:06:03.064 ************************************ 00:06:03.064 18:58:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.064 18:58:05 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:03.064 18:58:05 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:03.064 18:58:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.064 18:58:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.064 ************************************ 00:06:03.064 START TEST accel_compress_verify 00:06:03.064 ************************************ 00:06:03.064 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:03.064 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:03.064 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:03.064 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:03.064 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.064 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:03.064 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.064 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:03.064 18:58:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:03.064 18:58:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:03.064 18:58:05 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.064 18:58:05 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.064 18:58:05 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.064 18:58:05 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.064 18:58:05 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.064 18:58:05 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:03.064 18:58:05 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:03.064 [2024-07-12 18:58:05.502222] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:06:03.064 [2024-07-12 18:58:05.502296] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134039 ] 00:06:03.064 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.064 [2024-07-12 18:58:05.572069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.324 [2024-07-12 18:58:05.648803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.324 [2024-07-12 18:58:05.690038] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.324 [2024-07-12 18:58:05.749161] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:03.324 00:06:03.324 Compression does not support the verify option, aborting. 00:06:03.324 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:03.324 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.324 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:03.324 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:03.324 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:03.324 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.324 00:06:03.324 real 0m0.349s 00:06:03.324 user 0m0.255s 00:06:03.325 sys 0m0.130s 00:06:03.325 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.325 18:58:05 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:03.325 ************************************ 00:06:03.325 END TEST accel_compress_verify 00:06:03.325 ************************************ 00:06:03.325 18:58:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.325 18:58:05 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:03.325 18:58:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:03.325 18:58:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.325 18:58:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.325 ************************************ 00:06:03.325 START TEST accel_wrong_workload 00:06:03.325 ************************************ 00:06:03.325 18:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:03.325 18:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:03.325 18:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:03.325 18:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:03.325 18:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.325 18:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:03.325 18:58:05 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.325 18:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:03.325 18:58:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:03.585 18:58:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:03.585 18:58:05 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.585 18:58:05 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.585 18:58:05 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.585 18:58:05 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.585 18:58:05 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.585 18:58:05 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:03.585 18:58:05 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:03.585 Unsupported workload type: foobar 00:06:03.585 [2024-07-12 18:58:05.913986] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:03.585 accel_perf options: 00:06:03.585 [-h help message] 00:06:03.585 [-q queue depth per core] 00:06:03.585 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:03.585 [-T number of threads per core 00:06:03.585 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:03.585 [-t time in seconds] 00:06:03.585 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:03.585 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:03.585 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:03.585 [-l for compress/decompress workloads, name of uncompressed input file 00:06:03.585 [-S for crc32c workload, use this seed value (default 0) 00:06:03.585 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:03.585 [-f for fill workload, use this BYTE value (default 255) 00:06:03.585 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:03.585 [-y verify result if this switch is on] 00:06:03.585 [-a tasks to allocate per core (default: same value as -q)] 00:06:03.585 Can be used to spread operations across a wider range of memory. 
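The option dump above, printed when accel_perf rejects -w foobar, lists every knob the remaining accel cases exercise. The accel_crc32c test further below combines four of them; its invocation, reconstructed from its trace, is:

    build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    #   -t 1        run the workload for one second
    #   -w crc32c   workload type
    #   -S 32       seed value for the crc32c calculation
    #   -y          verify each result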
00:06:03.585 18:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:03.585 18:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.585 18:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:03.585 18:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.585 00:06:03.585 real 0m0.031s 00:06:03.585 user 0m0.016s 00:06:03.585 sys 0m0.015s 00:06:03.585 18:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.585 18:58:05 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:03.585 ************************************ 00:06:03.585 END TEST accel_wrong_workload 00:06:03.585 ************************************ 00:06:03.585 Error: writing output failed: Broken pipe 00:06:03.585 18:58:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.585 18:58:05 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:03.585 18:58:05 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:03.585 18:58:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.585 18:58:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.585 ************************************ 00:06:03.585 START TEST accel_negative_buffers 00:06:03.585 ************************************ 00:06:03.585 18:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:03.585 18:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:03.585 18:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:03.585 18:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:03.585 18:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.585 18:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:03.585 18:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.585 18:58:05 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:03.585 18:58:05 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:03.585 18:58:05 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:03.585 18:58:05 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.585 18:58:05 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.585 18:58:05 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.585 18:58:05 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.585 18:58:05 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.585 18:58:05 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:03.585 18:58:05 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:03.585 -x option must be non-negative. 
00:06:03.586 [2024-07-12 18:58:06.014534] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:03.586 accel_perf options: 00:06:03.586 [-h help message] 00:06:03.586 [-q queue depth per core] 00:06:03.586 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:03.586 [-T number of threads per core 00:06:03.586 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:03.586 [-t time in seconds] 00:06:03.586 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:03.586 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:03.586 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:03.586 [-l for compress/decompress workloads, name of uncompressed input file 00:06:03.586 [-S for crc32c workload, use this seed value (default 0) 00:06:03.586 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:03.586 [-f for fill workload, use this BYTE value (default 255) 00:06:03.586 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:03.586 [-y verify result if this switch is on] 00:06:03.586 [-a tasks to allocate per core (default: same value as -q)] 00:06:03.586 Can be used to spread operations across a wider range of memory. 00:06:03.586 18:58:06 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:03.586 18:58:06 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.586 18:58:06 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:03.586 18:58:06 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.586 00:06:03.586 real 0m0.032s 00:06:03.586 user 0m0.019s 00:06:03.586 sys 0m0.013s 00:06:03.586 18:58:06 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.586 18:58:06 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:03.586 ************************************ 00:06:03.586 END TEST accel_negative_buffers 00:06:03.586 ************************************ 00:06:03.586 Error: writing output failed: Broken pipe 00:06:03.586 18:58:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.586 18:58:06 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:03.586 18:58:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:03.586 18:58:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.586 18:58:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.586 ************************************ 00:06:03.586 START TEST accel_crc32c 00:06:03.586 ************************************ 00:06:03.586 18:58:06 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:03.586 18:58:06 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:03.586 18:58:06 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:03.586 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.586 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.586 18:58:06 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:03.586 18:58:06 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:03.586 18:58:06 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:03.586 18:58:06 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.586 18:58:06 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.586 18:58:06 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.586 18:58:06 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.586 18:58:06 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.586 18:58:06 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:03.586 18:58:06 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:03.586 [2024-07-12 18:58:06.110888] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:06:03.586 [2024-07-12 18:58:06.110953] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134112 ] 00:06:03.586 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.846 [2024-07-12 18:58:06.178282] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.846 [2024-07-12 18:58:06.256118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:03.846 18:58:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:03.847 18:58:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:03.847 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:03.847 18:58:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:05.228 18:58:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.228 00:06:05.228 real 0m1.354s 00:06:05.228 user 0m1.242s 00:06:05.228 sys 0m0.121s 00:06:05.228 18:58:07 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.228 18:58:07 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:05.228 ************************************ 00:06:05.228 END TEST accel_crc32c 00:06:05.228 ************************************ 00:06:05.228 18:58:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.228 18:58:07 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:05.228 18:58:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:05.228 18:58:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.228 18:58:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.228 ************************************ 00:06:05.228 START TEST accel_crc32c_C2 00:06:05.228 ************************************ 00:06:05.228 18:58:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:05.228 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:05.228 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:05.228 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.228 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.228 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:05.228 18:58:07 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:05.228 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:05.229 [2024-07-12 18:58:07.531577] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:06:05.229 [2024-07-12 18:58:07.531627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134362 ] 00:06:05.229 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.229 [2024-07-12 18:58:07.599639] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.229 [2024-07-12 18:58:07.672674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:05.229 18:58:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.611 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.611 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.611 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.611 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.611 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.611 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.611 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.611 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.611 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.611 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.611 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.611 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.611 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.611 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.612 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:06.612 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.612 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:06.612 18:58:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.612 00:06:06.612 real 0m1.349s 00:06:06.612 user 0m1.239s 00:06:06.612 sys 0m0.123s 00:06:06.612 18:58:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.612 18:58:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:06.612 ************************************ 00:06:06.612 END TEST accel_crc32c_C2 00:06:06.612 ************************************ 00:06:06.612 18:58:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.612 18:58:08 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:06.612 18:58:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:06.612 18:58:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.612 18:58:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.612 ************************************ 00:06:06.612 START TEST accel_copy 00:06:06.612 ************************************ 00:06:06.612 18:58:08 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:06.612 18:58:08 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:06.612 18:58:08 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:06:06.612 18:58:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:08 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:06.612 18:58:08 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:06.612 18:58:08 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:06.612 18:58:08 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.612 18:58:08 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.612 18:58:08 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.612 18:58:08 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.612 18:58:08 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.612 18:58:08 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:06.612 18:58:08 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:06.612 [2024-07-12 18:58:08.946842] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:06:06.612 [2024-07-12 18:58:08.946891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134617 ] 00:06:06.612 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.612 [2024-07-12 18:58:09.014034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.612 [2024-07-12 18:58:09.090515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:06.612 18:58:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 
18:58:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:07.993 18:58:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.993 00:06:07.993 real 0m1.351s 00:06:07.993 user 0m1.240s 00:06:07.993 sys 0m0.125s 00:06:07.993 18:58:10 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.993 18:58:10 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:07.993 ************************************ 00:06:07.993 END TEST accel_copy 00:06:07.993 ************************************ 00:06:07.993 18:58:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.993 18:58:10 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.993 18:58:10 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:07.993 18:58:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.993 18:58:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.993 ************************************ 00:06:07.993 START TEST accel_fill 00:06:07.993 ************************************ 00:06:07.993 18:58:10 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:07.993 [2024-07-12 18:58:10.363096] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:06:07.993 [2024-07-12 18:58:10.363146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134888 ] 00:06:07.993 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.993 [2024-07-12 18:58:10.430992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.993 [2024-07-12 18:58:10.504752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:07.993 18:58:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.253 18:58:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.253 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.253 18:58:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.191 18:58:11 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:09.191 18:58:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.191 00:06:09.191 real 0m1.349s 00:06:09.191 user 0m1.237s 00:06:09.191 sys 0m0.125s 00:06:09.191 18:58:11 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.191 18:58:11 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:09.191 ************************************ 00:06:09.191 END TEST accel_fill 00:06:09.191 ************************************ 00:06:09.191 18:58:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.191 18:58:11 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:09.191 18:58:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:09.191 18:58:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.191 18:58:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.191 ************************************ 00:06:09.191 START TEST accel_copy_crc32c 00:06:09.191 ************************************ 00:06:09.191 18:58:11 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:09.191 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:09.191 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:09.191 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.191 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:09.191 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.191 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:09.191 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:09.191 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.191 18:58:11 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.191 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.191 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.191 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.191 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:09.191 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:09.452 [2024-07-12 18:58:11.778530] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:06:09.452 [2024-07-12 18:58:11.778598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135157 ] 00:06:09.452 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.452 [2024-07-12 18:58:11.846950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.452 [2024-07-12 18:58:11.924778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:09.452 
18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:09.452 18:58:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.833 00:06:10.833 real 0m1.354s 00:06:10.833 user 0m1.249s 00:06:10.833 sys 0m0.120s 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.833 18:58:13 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:10.833 ************************************ 00:06:10.833 END TEST accel_copy_crc32c 00:06:10.833 ************************************ 00:06:10.833 18:58:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.833 18:58:13 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:10.833 18:58:13 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:10.833 18:58:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.833 18:58:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.833 ************************************ 00:06:10.833 START TEST accel_copy_crc32c_C2 00:06:10.833 ************************************ 00:06:10.833 18:58:13 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:10.833 [2024-07-12 18:58:13.197528] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:06:10.833 [2024-07-12 18:58:13.197576] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135445 ] 00:06:10.833 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.833 [2024-07-12 18:58:13.263750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.833 [2024-07-12 18:58:13.336614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- 
00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:10.833 18:58:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:06:12.213 18:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:12.213 18:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:12.213 18:58:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:12.213 real 0m1.345s
00:06:12.213 user 0m1.239s
00:06:12.213 sys 0m0.119s
00:06:12.213 ************************************
00:06:12.213 END TEST accel_copy_crc32c_C2
00:06:12.213 ************************************
00:06:12.213 18:58:14 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:12.213 18:58:14 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:12.213 ************************************
00:06:12.213 START TEST accel_dualcast
00:06:12.213 ************************************
00:06:12.213 18:58:14 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:06:12.213 18:58:14 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:12.213 18:58:14 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:06:12.213 [2024-07-12 18:58:14.607286] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:06:12.213 [2024-07-12 18:58:14.607349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135720 ]
00:06:12.213 EAL: No free 2048 kB hugepages reported on node 1
00:06:12.213 [2024-07-12 18:58:14.677667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:12.213 [2024-07-12 18:58:14.749758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.473 18:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:06:12.473 18:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:06:12.473 18:58:14 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:06:12.473 18:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:12.473 18:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:06:12.473 18:58:14 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:06:12.473 18:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:06:12.473 18:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:06:12.473 18:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:06:12.473 18:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:06:12.473 18:58:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:06:13.412 18:58:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:13.412 18:58:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:13.412 18:58:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:13.412 real 0m1.349s
00:06:13.412 user 0m1.236s
00:06:13.412 sys 0m0.126s
00:06:13.412 ************************************
00:06:13.412 END TEST accel_dualcast
00:06:13.412 ************************************
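Each accel test in this stretch of the log drives the same accel_perf example binary with a different -w workload. As a rough local reproduction of the dualcast case above (a sketch, assuming an SPDK tree already built at the workspace path; the harness also feeds a JSON config over -c /dev/fd/62, dropped here since the trace shows accel_json_cfg initialized empty):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w dualcast -y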
00:06:13.412 18:58:15 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:13.412 18:58:15 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:13.672 ************************************
00:06:13.672 START TEST accel_compare
00:06:13.672 ************************************
00:06:13.672 18:58:15 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:06:13.672 18:58:15 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:13.672 18:58:15 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:06:13.672 [2024-07-12 18:58:16.019494] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:06:13.672 [2024-07-12 18:58:16.019541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135985 ]
00:06:13.672 EAL: No free 2048 kB hugepages reported on node 1
00:06:13.672 [2024-07-12 18:58:16.086095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.672 [2024-07-12 18:58:16.157890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.672 18:58:16 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:06:13.672 18:58:16 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:06:13.672 18:58:16 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:06:13.672 18:58:16 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:13.672 18:58:16 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:06:13.672 18:58:16 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:06:13.672 18:58:16 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:06:13.672 18:58:16 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:06:13.672 18:58:16 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:06:13.672 18:58:16 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:06:13.672 18:58:16 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:06:15.053 18:58:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:15.053 18:58:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:06:15.053 18:58:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:15.053 real 0m1.345s
00:06:15.053 user 0m1.236s
00:06:15.053 sys 0m0.120s
00:06:15.053 ************************************
00:06:15.053 END TEST accel_compare
00:06:15.053 ************************************
00:06:15.053 18:58:17 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:15.053 18:58:17 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:06:15.053 ************************************
00:06:15.053 START TEST accel_xor
00:06:15.053 ************************************
00:06:15.053 18:58:17 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:06:15.053 18:58:17 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:06:15.053 18:58:17 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:06:15.053 [2024-07-12 18:58:17.420800] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:06:15.053 [2024-07-12 18:58:17.420851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136254 ]
00:06:15.053 EAL: No free 2048 kB hugepages reported on node 1
00:06:15.053 [2024-07-12 18:58:17.490250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.053 [2024-07-12 18:58:17.564615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.053 18:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:06:15.053 18:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:06:15.053 18:58:17 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:06:15.053 18:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:06:15.053 18:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:15.053 18:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:06:15.053 18:58:17 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:06:15.053 18:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:15.053 18:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:15.054 18:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:06:15.054 18:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:06:15.054 18:58:17 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:06:16.435 18:58:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:16.435 18:58:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:16.435 18:58:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:16.435 real 0m1.345s
00:06:16.435 user 0m1.239s
00:06:16.435 sys 0m0.118s
00:06:16.435 ************************************
00:06:16.435 END TEST accel_xor
00:06:16.435 ************************************
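The xor pass above runs with the default of two source buffers (the val=2 step in its config readback); the harness then repeats the same workload with three sources. The equivalent direct invocation for that second pass, taken from the @12 trace line that follows (same path and config caveats as the dualcast sketch earlier):

    ./build/examples/accel_perf -t 1 -w xor -y -x 3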
00:06:16.435 18:58:18 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:16.435 18:58:18 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:16.435 ************************************
00:06:16.435 START TEST accel_xor
00:06:16.435 ************************************
00:06:16.435 18:58:18 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:06:16.435 18:58:18 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:06:16.435 18:58:18 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:06:16.435 [2024-07-12 18:58:18.835570] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:06:16.435 [2024-07-12 18:58:18.835629] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136523 ]
00:06:16.435 EAL: No free 2048 kB hugepages reported on node 1
00:06:16.435 [2024-07-12 18:58:18.906509] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:16.435 [2024-07-12 18:58:18.978478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.694 18:58:19 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:06:16.694 18:58:19 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:06:16.694 18:58:19 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:06:16.694 18:58:19 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:06:16.694 18:58:19 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:16.694 18:58:19 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:06:16.695 18:58:19 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:06:16.695 18:58:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:16.695 18:58:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:06:16.695 18:58:19 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:06:16.695 18:58:19 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:06:16.695 18:58:19 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:06:17.633 18:58:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:17.633 18:58:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:17.633 18:58:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:17.633 real 0m1.354s
00:06:17.633 user 0m1.241s
00:06:17.633 sys 0m0.125s
00:06:17.633 ************************************
00:06:17.633 END TEST accel_xor
00:06:17.633 ************************************
00:06:17.633 18:58:20 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:17.633 18:58:20 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:06:17.894 ************************************
00:06:17.894 START TEST accel_dif_verify
00:06:17.894 ************************************
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:06:17.894 [2024-07-12 18:58:20.255499] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:06:17.894 [2024-07-12 18:58:20.255565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136801 ]
00:06:17.894 EAL: No free 2048 kB hugepages reported on node 1
00:06:17.894 [2024-07-12 18:58:20.324256] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:17.894 [2024-07-12 18:58:20.397922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:06:17.894 18:58:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:06:19.278 18:58:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:19.278 18:58:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:06:19.278 18:58:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:19.278 real 0m1.351s
00:06:19.278 user 0m1.232s
00:06:19.278 sys 0m0.133s
00:06:19.278 ************************************
00:06:19.278 END TEST accel_dif_verify
00:06:19.278 ************************************
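Unlike the copy-style workloads, the two DIF tests read back four size values ('4096 bytes' twice, '512 bytes', '8 bytes') and val=No instead of val=Yes. A plausible reading, not confirmed anywhere in this log, is 4 KiB source and destination buffers split into 512-byte blocks each carrying 8 bytes of DIF metadata, with accel_perf's own data verification switched off because dif_verify is itself the operation under test. The direct invocation, as traced above:

    ./build/examples/accel_perf -t 1 -w dif_verify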
00:06:19.278 18:58:21 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:19.278 18:58:21 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:06:19.278 ************************************
00:06:19.278 START TEST accel_dif_generate
00:06:19.278 ************************************
00:06:19.278 18:58:21 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:06:19.278 18:58:21 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:06:19.278 18:58:21 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:06:19.278 [2024-07-12 18:58:21.671078] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:06:19.278 [2024-07-12 18:58:21.671145] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137068 ]
00:06:19.278 EAL: No free 2048 kB hugepages reported on node 1
00:06:19.278 [2024-07-12 18:58:21.721875] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:19.278 [2024-07-12 18:58:21.795680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.278 18:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:06:19.278 18:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:06:19.278 18:58:21 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:06:19.278 18:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:19.278 18:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:19.278 18:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:06:19.539 18:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:06:19.539 18:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:06:19.539 18:58:21 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:06:19.539 18:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:06:19.539 18:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:06:19.539 18:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:06:19.539 18:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:06:19.539 18:58:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:06:20.478 18:58:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:20.478 18:58:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:06:20.478 18:58:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:20.478 real 0m1.333s
00:06:20.478 user 0m1.240s
00:06:20.478 sys 0m0.109s
00:06:20.478 ************************************
00:06:20.478 END TEST accel_dif_generate
00:06:20.478 ************************************
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:20.478
00:06:20.478 real 0m1.333s
00:06:20.478 user 0m1.240s
00:06:20.478 sys 0m0.109s
00:06:20.478 18:58:22 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:20.478 18:58:22 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:06:20.478 ************************************
00:06:20.478 END TEST accel_dif_generate
00:06:20.478 ************************************
00:06:20.478 18:58:23 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:20.478 18:58:23 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:06:20.478 18:58:23 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:06:20.478 18:58:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:20.478 18:58:23 accel -- common/autotest_common.sh@10 -- # set +x
00:06:20.478 ************************************
00:06:20.478 START TEST accel_dif_generate_copy
00:06:20.478 ************************************
00:06:20.478 18:58:23 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy
00:06:20.478 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:06:20.478 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:06:20.478 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:20.478 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:06:20.478 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:20.478 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:06:20.478 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:06:20.478 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:20.478 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:20.478 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:20.478 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:20.478 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=,
00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:06:20.737 [2024-07-12 18:58:23.063926] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
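The repeated "-- # val= / case "$var" in / IFS=: / read -r var val" groups throughout this chunk are bash xtrace output from accel.sh's expected-configuration parser: accel_perf echoes its effective settings and the harness walks them field by field. A minimal sketch of that idiom, paraphrased from the traced line numbers rather than copied from SPDK's source (accel_perf_output is a hypothetical stand-in for the captured command output):

    # Split each "key: value" line on ':' and keep the fields of interest;
    # every loop iteration produces one val=/case/IFS/read group in the trace.
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=${val# } ;;      # e.g. dif_generate_copy
            module) accel_module=${val# } ;;   # e.g. software
        esac
    done < "$accel_perf_output"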
00:06:20.737 [2024-07-12 18:58:23.063983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137315 ] 00:06:20.737 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.737 [2024-07-12 18:58:23.133409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.737 [2024-07-12 18:58:23.205706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
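To replay one of these cases outside Jenkins, the complete accel_perf command line is already in the trace (the accel/accel.sh@12 entries); only the JSON config delivered on fd 62 is built at runtime. A hedged sketch of an equivalent manual run, with the workspace path taken from the log; passing an empty JSON object is an assumption that mirrors the empty accel_json_cfg=() and all-false [[ 0 -gt 0 ]] checks traced above:

    # Hypothetical re-run of the dif_generate_copy case; the harness may feed
    # a fuller JSON document on fd 62 than the bare '{}' used here.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w dif_generate_copy 62<<< '{}'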
00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.737 18:58:23 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=:
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:22.117
00:06:22.117 real 0m1.346s
00:06:22.117 user 0m1.239s
00:06:22.117 sys 0m0.120s
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:22.117 18:58:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:06:22.117 ************************************
00:06:22.117 END TEST accel_dif_generate_copy
00:06:22.117 ************************************
00:06:22.117 18:58:24 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:22.117 18:58:24 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:06:22.117 18:58:24 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:22.117 18:58:24 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:06:22.117 18:58:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:22.117 18:58:24 accel -- common/autotest_common.sh@10 -- # set +x
00:06:22.117 ************************************
00:06:22.117 START TEST accel_comp
00:06:22.117 ************************************
00:06:22.117 18:58:24 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:22.117 18:58:24 accel.accel_comp --
accel/accel.sh@16 -- # local accel_opc 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:22.117 [2024-07-12 18:58:24.480587] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:06:22.117 [2024-07-12 18:58:24.480656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137573 ] 00:06:22.117 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.117 [2024-07-12 18:58:24.548235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.117 [2024-07-12 18:58:24.619137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.117 18:58:24 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.117 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val
00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:06:22.118 18:58:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:06:23.498 18:58:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:23.498
00:06:23.498 real 0m1.348s
00:06:23.498 user 0m1.239s
00:06:23.498 sys 0m0.122s
00:06:23.498 18:58:25 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:23.498 18:58:25 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:06:23.498 ************************************
00:06:23.498 END TEST accel_comp
00:06:23.498 ************************************
00:06:23.498 18:58:25 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:23.498 18:58:25 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:06:23.498 18:58:25 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:06:23.498 18:58:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:23.498 18:58:25 accel -- common/autotest_common.sh@10 -- # set +x
00:06:23.498 ************************************
00:06:23.498 START TEST accel_decomp
00:06:23.498 ************************************
00:06:23.498 18:58:25 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:06:23.498 18:58:25 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
00:06:23.498 18:58:25 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
00:06:23.498 18:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:06:23.498 18:58:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:06:23.498 18:58:25 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:06:23.498 18:58:25 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:06:23.498 18:58:25 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
00:06:23.498 18:58:25 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:23.498 18:58:25 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:23.498 18:58:25 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:23.498 18:58:25 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:23.498 18:58:25 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:23.498 18:58:25 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=,
00:06:23.498 18:58:25 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r .
00:06:23.498 [2024-07-12 18:58:25.887289] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
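The decompress cases differ from the generate/compress ones in reading a fixed input: -l points accel_perf at test/accel/bib, and -y, which appears on every decompress run in this chunk but on none of the generate runs, plausibly asks for the output to be verified; treat that reading of -y as an inference from usage, not documented behavior. Sketch of the invocation just traced:

    # Flags exactly as traced above; the meaning of -y (verify) is an assumption,
    # as is the bare '{}' config on fd 62.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y 62<<< '{}'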
00:06:23.498 [2024-07-12 18:58:25.887334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137818 ] 00:06:23.498 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.498 [2024-07-12 18:58:25.953822] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.498 [2024-07-12 18:58:26.025071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.758 18:58:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.698 18:58:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.698 18:58:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.698 18:58:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.698 18:58:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.698 18:58:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.698 18:58:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.698 18:58:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.698 18:58:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.698 18:58:27 
accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:06:24.698 18:58:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:06:24.698 18:58:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:06:24.698 18:58:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:06:24.698 18:58:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:06:24.699 18:58:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:06:24.699 18:58:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:06:24.699 18:58:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:06:24.699 18:58:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:06:24.699 18:58:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in
00:06:24.699 18:58:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=:
00:06:24.699 18:58:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val
00:06:24.699 18:58:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:24.699 18:58:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:24.699 18:58:27 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:24.699
00:06:24.699 real 0m1.349s
00:06:24.699 user 0m1.235s
00:06:24.699 sys 0m0.128s
00:06:24.699 18:58:27 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:24.699 18:58:27 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:06:24.699 ************************************
00:06:24.699 END TEST accel_decomp
00:06:24.699 ************************************
00:06:24.699 18:58:27 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:24.699 18:58:27 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:24.699 18:58:27 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:06:24.699 18:58:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:24.699 18:58:27 accel -- common/autotest_common.sh@10 -- # set +x
00:06:24.959 ************************************
00:06:24.959 START TEST accel_decomp_full
00:06:24.959 ************************************
00:06:24.959 18:58:27 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc
00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module
00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:06:24.959 18:58:27
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:24.959 [2024-07-12 18:58:27.297496] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:06:24.959 [2024-07-12 18:58:27.297552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138066 ] 00:06:24.959 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.959 [2024-07-12 18:58:27.367726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.959 [2024-07-12 18:58:27.440592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in
00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:06:24.959 18:58:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:06:26.342 18:58:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:06:26.343 18:58:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:06:26.343 18:58:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:06:26.343 18:58:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:06:26.343 18:58:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:06:26.343 18:58:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:06:26.343 18:58:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:06:26.343 18:58:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:06:26.343 18:58:28 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:26.343 18:58:28 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:26.343 18:58:28 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:26.343
00:06:26.343 real 0m1.355s
00:06:26.343 user 0m1.247s
00:06:26.343 sys 0m0.122s
00:06:26.343 18:58:28 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:26.343 18:58:28 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:06:26.343 ************************************
00:06:26.343 END TEST accel_decomp_full
00:06:26.343 ************************************
00:06:26.343 18:58:28 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:26.343 18:58:28 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:26.343 18:58:28 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
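Before the mcore run starts, note the -o 0 variant that just finished: plain accel_decomp traced '4096 bytes' operations, while accel_decomp_full passed -o 0 and traced '111250 bytes', the whole bib input in a single operation. So -o evidently sets the transfer size, with 0 meaning "use the input size"; that interpretation is inferred from the traced values, not stated anywhere in this log. A quick local check of the inference:

    # Should print 111250, matching the '111250 bytes' value traced for the -o 0 run.
    stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib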
00:06:26.343 18:58:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:26.343 18:58:28 accel -- common/autotest_common.sh@10 -- # set +x
00:06:26.343 ************************************
00:06:26.343 START TEST accel_decomp_mcore
00:06:26.343 ************************************
00:06:26.343 18:58:28 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=,
00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r .
00:06:26.343 [2024-07-12 18:58:28.715889] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
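accel_decomp_mcore is the first case here to spread work across cores: the -m 0xf above reappears as -c 0xf in the DPDK EAL parameters that follow, and 0xf has four bits set, matching the "Total cores available: 4" notice and the four reactors reported next. A small sketch for decoding such a mask:

    # Enumerate the cores selected by a hex core mask (prints core 0 .. core 3 for 0xf).
    mask=0xf
    for ((core = 0; core < 64; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core"
    done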
00:06:26.343 [2024-07-12 18:58:28.715947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138320 ] 00:06:26.343 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.343 [2024-07-12 18:58:28.785453] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.343 [2024-07-12 18:58:28.860258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.343 [2024-07-12 18:58:28.860323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.343 [2024-07-12 18:58:28.860427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.343 [2024-07-12 18:58:28.860428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.343 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.603 18:58:28 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:26.603 18:58:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.541 00:06:27.541 real 0m1.361s 00:06:27.541 user 0m4.580s 00:06:27.541 sys 0m0.126s 00:06:27.541 18:58:30 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.541 18:58:30 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:27.541 ************************************ 00:06:27.541 END TEST accel_decomp_mcore 00:06:27.541 ************************************ 00:06:27.541 18:58:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.541 18:58:30 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:27.541 18:58:30 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:27.541 18:58:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.541 18:58:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.802 ************************************ 00:06:27.802 START TEST accel_decomp_full_mcore 00:06:27.802 ************************************ 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:27.802 [2024-07-12 18:58:30.144418] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
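The xtrace here pins down how every accel_decomp_* case drives SPDK: the harness shells out once to the accel_perf example, and the `val=` lines that follow echo the parsed flags back. A minimal standalone sketch of the accel_decomp_full_mcore invocation just started, using the paths the log shows — the empty `{}` config fed on fd 62 is an assumption standing in for build_accel_config, which emitted no module entries in this run:

```bash
# Sketch only. Flag meanings as the trace suggests: -t duration in seconds,
# -w workload, -l pre-compressed input file, -y verify the output, -o op size
# (0 appears to mean "whole file"; the trace later reports '111250 bytes'),
# -m core mask (0xf = four reactors).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
    -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf 62< <(echo '{}')
```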
00:06:27.802 [2024-07-12 18:58:30.144468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138568 ] 00:06:27.802 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.802 [2024-07-12 18:58:30.211388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.802 [2024-07-12 18:58:30.286009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.802 [2024-07-12 18:58:30.286220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.802 [2024-07-12 18:58:30.286116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.802 [2024-07-12 18:58:30.286221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.802 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.803 18:58:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.184 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.184 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.184 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.184 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.184 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.184 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.184 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.184 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.185 00:06:29.185 real 0m1.372s 00:06:29.185 user 0m4.622s 00:06:29.185 sys 0m0.131s 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.185 18:58:31 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:29.185 ************************************ 00:06:29.185 END TEST accel_decomp_full_mcore 00:06:29.185 ************************************ 00:06:29.185 18:58:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.185 18:58:31 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.185 18:58:31 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:29.185 18:58:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.185 18:58:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.185 ************************************ 00:06:29.185 START TEST accel_decomp_mthread 00:06:29.185 ************************************ 00:06:29.185 18:58:31 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.185 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:29.185 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:29.185 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.185 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.185 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.185 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.185 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:29.185 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.185 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.185 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.185 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.185 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.185 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:29.185 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:29.185 [2024-07-12 18:58:31.580017] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
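Every case in this log sits between the same banner pair and ends with a `real/user/sys` triplet; both come from the harness's run_test wrapper, which the `run_test accel_decomp_mthread ...` line just above invokes again. A rough approximation of its shape — the real helper in autotest_common.sh also performs the argument checks traced as `'[' 11 -le 1 ']'` and records per-test timings, none of which is reproduced here:

```bash
# Approximation of run_test, not the actual autotest_common.sh code.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # emits the real/user/sys lines seen after each test
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}
```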
00:06:29.185 [2024-07-12 18:58:31.580065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138824 ] 00:06:29.185 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.185 [2024-07-12 18:58:31.646347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.185 [2024-07-12 18:58:31.719191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.445 18:58:31 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:29.445 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.446 18:58:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.385 18:58:32 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.385 00:06:30.385 real 0m1.352s 00:06:30.385 user 0m1.244s 00:06:30.385 sys 0m0.122s 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.385 18:58:32 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:30.385 ************************************ 00:06:30.385 END TEST accel_decomp_mthread 00:06:30.385 ************************************ 00:06:30.385 18:58:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.385 18:58:32 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:30.385 18:58:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:30.385 18:58:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.385 18:58:32 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:30.644 ************************************ 00:06:30.644 START TEST accel_decomp_full_mthread 00:06:30.644 ************************************ 00:06:30.644 18:58:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:30.644 18:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:30.644 18:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:30.644 18:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.644 18:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.644 18:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:30.644 18:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:30.644 18:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:30.644 18:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.644 18:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.644 18:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.644 18:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.644 18:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.644 18:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:30.644 18:58:32 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:30.644 [2024-07-12 18:58:32.995681] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
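With accel_decomp_full_mthread now under way, all four decompress variants are in flight or done, and they differ in only two knobs: the op size (`-o 0` whole-file versus the default '4096 bytes' the mthread trace showed) and the parallelism source (`-m` core mask versus `-T` worker threads). A sketch collapsing the four run_test command lines into one loop — the plain accel_decomp_mcore flags precede this excerpt, so `-m 0xf` alone is inferred from the pattern:

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
declare -A variants=(
    [accel_decomp_mcore]="-m 0xf"            # 4 reactors, 4096-byte ops (inferred)
    [accel_decomp_full_mcore]="-o 0 -m 0xf"  # 4 reactors, whole-file ops
    [accel_decomp_mthread]="-T 2"            # 1 reactor, 2 worker threads
    [accel_decomp_full_mthread]="-o 0 -T 2"  # 1 reactor, 2 threads, whole-file
)
for name in "${!variants[@]}"; do
    echo "=== $name ==="
    # ${variants[$name]} is left unquoted so the flag string word-splits.
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y ${variants[$name]} 62< <(echo '{}')
done
```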
00:06:30.644 [2024-07-12 18:58:32.995727] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139072 ] 00:06:30.644 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.644 [2024-07-12 18:58:33.061529] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.644 [2024-07-12 18:58:33.133047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.644 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:30.645 18:58:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.025 00:06:32.025 real 0m1.373s 00:06:32.025 user 0m1.261s 00:06:32.025 sys 0m0.124s 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.025 18:58:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:32.025 ************************************ 00:06:32.025 END TEST accel_decomp_full_mthread 
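That closes the last of the four decompress variants. Pulling the timing triplets together from this log makes the scheduling difference visible: the two `-m 0xf` runs burn roughly 4.6 s of CPU in about 1.4 s of wall time because all four reactors poll for the full second, while the single-reactor `-T 2` runs stay near 1.25 s of user time:

  variant                     real       user       sys
  accel_decomp_mcore          0m1.361s   0m4.580s   0m0.126s
  accel_decomp_full_mcore     0m1.372s   0m4.622s   0m0.131s
  accel_decomp_mthread        0m1.352s   0m1.244s   0m0.122s
  accel_decomp_full_mthread   0m1.373s   0m1.261s   0m0.124s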
00:06:32.025 ************************************ 00:06:32.025 18:58:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.025 18:58:34 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:32.025 18:58:34 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:32.026 18:58:34 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:32.026 18:58:34 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:32.026 18:58:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.026 18:58:34 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.026 18:58:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.026 18:58:34 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.026 18:58:34 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.026 18:58:34 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.026 18:58:34 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.026 18:58:34 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:32.026 18:58:34 accel -- accel/accel.sh@41 -- # jq -r . 00:06:32.026 ************************************ 00:06:32.026 START TEST accel_dif_functional_tests 00:06:32.026 ************************************ 00:06:32.026 18:58:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:32.026 [2024-07-12 18:58:34.449419] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:06:32.026 [2024-07-12 18:58:34.449457] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139321 ] 00:06:32.026 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.026 [2024-07-12 18:58:34.516047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.026 [2024-07-12 18:58:34.589193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.026 [2024-07-12 18:58:34.589310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.026 [2024-07-12 18:58:34.589310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.285 00:06:32.285 00:06:32.285 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.285 http://cunit.sourceforge.net/ 00:06:32.285 00:06:32.285 00:06:32.285 Suite: accel_dif 00:06:32.285 Test: verify: DIF generated, GUARD check ...passed 00:06:32.285 Test: verify: DIF generated, APPTAG check ...passed 00:06:32.285 Test: verify: DIF generated, REFTAG check ...passed 00:06:32.285 Test: verify: DIF not generated, GUARD check ...[2024-07-12 18:58:34.656398] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:32.285 passed 00:06:32.285 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 18:58:34.656455] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:32.285 passed 00:06:32.285 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 18:58:34.656489] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:32.285 passed 00:06:32.285 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:32.285 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 18:58:34.656530] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:32.285 passed 00:06:32.285 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:32.285 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:32.285 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:32.285 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 18:58:34.656630] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:32.285 passed 00:06:32.285 Test: verify copy: DIF generated, GUARD check ...passed 00:06:32.285 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:32.285 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:32.285 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 18:58:34.656735] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:32.285 passed 00:06:32.285 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 18:58:34.656756] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:32.285 passed 00:06:32.285 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 18:58:34.656776] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:32.285 passed 00:06:32.285 Test: generate copy: DIF generated, GUARD check ...passed 00:06:32.285 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:32.285 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:32.285 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:32.285 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:32.285 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:32.285 Test: generate copy: iovecs-len validate ...[2024-07-12 18:58:34.656935] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
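The dif.c *ERROR* lines running through this stretch (with their `passed` verdicts continuing below) are the point of the suite, not failures: each negative case corrupts one Protection Information field — Guard CRC, Application Tag, or Reference Tag — and the test passes only if verification reports the mismatch, which is what every Expected=/Actual= pair records. To replay the same CUnit suite outside the harness — the empty `{}` config is an assumption, matching the zeroed module counts in the build_accel_config trace:

```bash
# The dif binary takes the accel JSON config on a file descriptor,
# exactly as accel_perf does in the earlier tests.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/test/accel/dif/dif" -c /dev/fd/62 62< <(echo '{}')
```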
00:06:32.285 passed 00:06:32.285 Test: generate copy: buffer alignment validate ...passed 00:06:32.285 00:06:32.285 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.285 suites 1 1 n/a 0 0 00:06:32.285 tests 26 26 26 0 0 00:06:32.285 asserts 115 115 115 0 n/a 00:06:32.285 00:06:32.285 Elapsed time = 0.000 seconds 00:06:32.285 00:06:32.285 real 0m0.418s 00:06:32.285 user 0m0.617s 00:06:32.285 sys 0m0.152s 00:06:32.285 18:58:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.285 18:58:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:32.285 ************************************ 00:06:32.285 END TEST accel_dif_functional_tests 00:06:32.285 ************************************ 00:06:32.545 18:58:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.545 00:06:32.545 real 0m31.257s 00:06:32.545 user 0m34.906s 00:06:32.545 sys 0m4.387s 00:06:32.545 18:58:34 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.545 18:58:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.545 ************************************ 00:06:32.545 END TEST accel 00:06:32.545 ************************************ 00:06:32.545 18:58:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:32.545 18:58:34 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:32.545 18:58:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.545 18:58:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.545 18:58:34 -- common/autotest_common.sh@10 -- # set +x 00:06:32.545 ************************************ 00:06:32.545 START TEST accel_rpc 00:06:32.545 ************************************ 00:06:32.545 18:58:34 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:32.545 * Looking for test storage... 00:06:32.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:32.545 18:58:35 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:32.545 18:58:35 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=139396 00:06:32.545 18:58:35 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 139396 00:06:32.545 18:58:35 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:32.545 18:58:35 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 139396 ']' 00:06:32.545 18:58:35 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.545 18:58:35 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.545 18:58:35 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.545 18:58:35 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.545 18:58:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.545 [2024-07-12 18:58:35.068162] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
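accel_rpc switches from example binaries to a live target: spdk_tgt is launched with --wait-for-rpc, and waitforlisten blocks until pid 139396 answers on /var/tmp/spdk.sock. A hypothetical reduction of that helper, built only from the locals the trace exposes (rpc_addr, max_retries=100) — the real autotest_common.sh version handles more cases:

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do              # max_retries=100, as traced
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        "$SPDK/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods \
            &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}
```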
00:06:32.546 [2024-07-12 18:58:35.068208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139396 ] 00:06:32.546 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.805 [2024-07-12 18:58:35.135086] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.805 [2024-07-12 18:58:35.213491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.375 18:58:35 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.375 18:58:35 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:33.375 18:58:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:33.375 18:58:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:33.375 18:58:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:33.375 18:58:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:33.375 18:58:35 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:33.375 18:58:35 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.375 18:58:35 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.375 18:58:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.375 ************************************ 00:06:33.375 START TEST accel_assign_opcode 00:06:33.375 ************************************ 00:06:33.375 18:58:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:33.375 18:58:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:33.375 18:58:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.375 18:58:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.375 [2024-07-12 18:58:35.891475] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:33.375 18:58:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.375 18:58:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:33.375 18:58:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.375 18:58:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.375 [2024-07-12 18:58:35.899490] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:33.375 18:58:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.375 18:58:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:33.375 18:58:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.375 18:58:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.635 18:58:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.635 18:58:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:33.635 18:58:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:33.635 18:58:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
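Stripped of the xtrace plumbing, accel_assign_opcode is a short RPC conversation with that target (rpc_cmd is the harness's wrapper around scripts/rpc.py). This is the sequence the surrounding trace encodes, with its outcome — `software` — checked in the lines that follow:

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Before framework init, an opcode can be assigned even to a bogus module...
"$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m incorrect
# ...and then simply reassigned to a real one:
"$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
# Only now complete startup (the target was launched with --wait-for-rpc):
"$SPDK/scripts/rpc.py" framework_start_init
# Confirm the 'copy' opcode landed on the software module:
"$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy | grep software
```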
00:06:33.635 18:58:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.635 18:58:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:33.635 18:58:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.635 software 00:06:33.635 00:06:33.635 real 0m0.238s 00:06:33.635 user 0m0.048s 00:06:33.635 sys 0m0.009s 00:06:33.635 18:58:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.635 18:58:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.635 ************************************ 00:06:33.635 END TEST accel_assign_opcode 00:06:33.635 ************************************ 00:06:33.635 18:58:36 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:33.635 18:58:36 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 139396 00:06:33.635 18:58:36 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 139396 ']' 00:06:33.635 18:58:36 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 139396 00:06:33.635 18:58:36 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:33.635 18:58:36 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.635 18:58:36 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 139396 00:06:33.635 18:58:36 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.635 18:58:36 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.635 18:58:36 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 139396' 00:06:33.635 killing process with pid 139396 00:06:33.635 18:58:36 accel_rpc -- common/autotest_common.sh@967 -- # kill 139396 00:06:33.635 18:58:36 accel_rpc -- common/autotest_common.sh@972 -- # wait 139396 00:06:34.204 00:06:34.204 real 0m1.576s 00:06:34.204 user 0m1.630s 00:06:34.204 sys 0m0.435s 00:06:34.205 18:58:36 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.205 18:58:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.205 ************************************ 00:06:34.205 END TEST accel_rpc 00:06:34.205 ************************************ 00:06:34.205 18:58:36 -- common/autotest_common.sh@1142 -- # return 0 00:06:34.205 18:58:36 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:34.205 18:58:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.205 18:58:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.205 18:58:36 -- common/autotest_common.sh@10 -- # set +x 00:06:34.205 ************************************ 00:06:34.205 START TEST app_cmdline 00:06:34.205 ************************************ 00:06:34.205 18:58:36 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:34.205 * Looking for test storage... 
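app_cmdline inverts the access control: instead of --wait-for-rpc, the target gets an explicit RPC allowlist, and the test verifies that the two whitelisted methods answer while everything else is refused. The equivalent manual session, with the harness's waitforlisten and killprocess replaced by crude stand-ins:

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
tgt_pid=$!
sleep 1                                   # stand-in for waitforlisten
"$SPDK/scripts/rpc.py" spdk_get_version   # allowed; returns the JSON below
"$SPDK/scripts/rpc.py" rpc_get_methods | jq -r '.[]' | sort
kill "$tgt_pid"
```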
00:06:34.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:34.205 18:58:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:34.205 18:58:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=139823 00:06:34.205 18:58:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 139823 00:06:34.205 18:58:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:34.205 18:58:36 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 139823 ']' 00:06:34.205 18:58:36 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.205 18:58:36 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.205 18:58:36 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.205 18:58:36 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.205 18:58:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:34.205 [2024-07-12 18:58:36.710751] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:06:34.205 [2024-07-12 18:58:36.710806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139823 ] 00:06:34.205 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.465 [2024-07-12 18:58:36.777543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.465 [2024-07-12 18:58:36.857250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.034 18:58:37 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.034 18:58:37 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:35.034 18:58:37 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:35.293 { 00:06:35.293 "version": "SPDK v24.09-pre git sha1 5f33ec93a", 00:06:35.293 "fields": { 00:06:35.293 "major": 24, 00:06:35.293 "minor": 9, 00:06:35.293 "patch": 0, 00:06:35.293 "suffix": "-pre", 00:06:35.293 "commit": "5f33ec93a" 00:06:35.293 } 00:06:35.293 } 00:06:35.293 18:58:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:35.293 18:58:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:35.293 18:58:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:35.293 18:58:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:35.293 18:58:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:35.293 18:58:37 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.293 18:58:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.293 18:58:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:35.293 18:58:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:35.293 18:58:37 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.293 18:58:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:35.293 18:58:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:35.293 18:58:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.293 18:58:37 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:35.293 18:58:37 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.293 18:58:37 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.293 18:58:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.293 18:58:37 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.293 18:58:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.293 18:58:37 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.293 18:58:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.293 18:58:37 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.293 18:58:37 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:35.293 18:58:37 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.553 request: 00:06:35.553 { 00:06:35.553 "method": "env_dpdk_get_mem_stats", 00:06:35.553 "req_id": 1 00:06:35.553 } 00:06:35.553 Got JSON-RPC error response 00:06:35.553 response: 00:06:35.553 { 00:06:35.553 "code": -32601, 00:06:35.553 "message": "Method not found" 00:06:35.553 } 00:06:35.553 18:58:37 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:35.553 18:58:37 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.553 18:58:37 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:35.553 18:58:37 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.553 18:58:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 139823 00:06:35.553 18:58:37 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 139823 ']' 00:06:35.553 18:58:37 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 139823 00:06:35.553 18:58:37 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:35.553 18:58:37 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.553 18:58:37 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 139823 00:06:35.553 18:58:37 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.553 18:58:37 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.553 18:58:37 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 139823' 00:06:35.553 killing process with pid 139823 00:06:35.553 18:58:37 app_cmdline -- common/autotest_common.sh@967 -- # kill 139823 00:06:35.553 18:58:37 app_cmdline -- common/autotest_common.sh@972 -- # wait 139823 00:06:35.813 00:06:35.813 real 0m1.675s 00:06:35.813 user 0m1.990s 00:06:35.813 sys 0m0.431s 00:06:35.813 18:58:38 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.813 
18:58:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.813 ************************************ 00:06:35.813 END TEST app_cmdline 00:06:35.813 ************************************ 00:06:35.813 18:58:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:35.813 18:58:38 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:35.813 18:58:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.813 18:58:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.813 18:58:38 -- common/autotest_common.sh@10 -- # set +x 00:06:35.813 ************************************ 00:06:35.813 START TEST version 00:06:35.813 ************************************ 00:06:35.813 18:58:38 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:36.074 * Looking for test storage... 00:06:36.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:36.074 18:58:38 version -- app/version.sh@17 -- # get_header_version major 00:06:36.074 18:58:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.074 18:58:38 version -- app/version.sh@14 -- # cut -f2 00:06:36.074 18:58:38 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.074 18:58:38 version -- app/version.sh@17 -- # major=24 00:06:36.074 18:58:38 version -- app/version.sh@18 -- # get_header_version minor 00:06:36.074 18:58:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.074 18:58:38 version -- app/version.sh@14 -- # cut -f2 00:06:36.074 18:58:38 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.074 18:58:38 version -- app/version.sh@18 -- # minor=9 00:06:36.074 18:58:38 version -- app/version.sh@19 -- # get_header_version patch 00:06:36.074 18:58:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.074 18:58:38 version -- app/version.sh@14 -- # cut -f2 00:06:36.074 18:58:38 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.074 18:58:38 version -- app/version.sh@19 -- # patch=0 00:06:36.074 18:58:38 version -- app/version.sh@20 -- # get_header_version suffix 00:06:36.074 18:58:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.074 18:58:38 version -- app/version.sh@14 -- # cut -f2 00:06:36.074 18:58:38 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.074 18:58:38 version -- app/version.sh@20 -- # suffix=-pre 00:06:36.074 18:58:38 version -- app/version.sh@22 -- # version=24.9 00:06:36.074 18:58:38 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:36.074 18:58:38 version -- app/version.sh@28 -- # version=24.9rc0 00:06:36.074 18:58:38 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:36.074 18:58:38 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
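The version test just traced derives each component (major, minor, patch, suffix) by scraping include/spdk/version.h with a grep | cut | tr pipeline, assembles 24.9rc0, and then cross-checks the result against the Python bindings. A condensed sketch of one extraction plus the cross-check, assuming it runs from the SPDK repo root with the in-tree python/ package on PYTHONPATH:

# Each component is a #define; isolate the value field and strip the quotes
grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'
grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'
# The assembled string must match what the Python package reports
PYTHONPATH=python python3 -c 'import spdk; print(spdk.__version__)'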
00:06:36.074 18:58:38 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:36.074 18:58:38 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:36.074 00:06:36.074 real 0m0.155s 00:06:36.074 user 0m0.079s 00:06:36.074 sys 0m0.113s 00:06:36.074 18:58:38 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.074 18:58:38 version -- common/autotest_common.sh@10 -- # set +x 00:06:36.074 ************************************ 00:06:36.074 END TEST version 00:06:36.074 ************************************ 00:06:36.074 18:58:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:36.074 18:58:38 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:36.074 18:58:38 -- spdk/autotest.sh@198 -- # uname -s 00:06:36.074 18:58:38 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:36.074 18:58:38 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:36.074 18:58:38 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:36.074 18:58:38 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:36.074 18:58:38 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:36.074 18:58:38 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:36.074 18:58:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:36.074 18:58:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.074 18:58:38 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:36.074 18:58:38 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:36.074 18:58:38 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:36.074 18:58:38 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:36.074 18:58:38 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:36.074 18:58:38 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:36.074 18:58:38 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:36.074 18:58:38 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:36.074 18:58:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.074 18:58:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.074 ************************************ 00:06:36.074 START TEST nvmf_tcp 00:06:36.074 ************************************ 00:06:36.074 18:58:38 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:36.334 * Looking for test storage... 00:06:36.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.334 18:58:38 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.334 18:58:38 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.334 18:58:38 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.334 18:58:38 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.334 18:58:38 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.334 18:58:38 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.334 18:58:38 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:36.334 18:58:38 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:36.334 18:58:38 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.334 18:58:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:36.334 18:58:38 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:36.334 18:58:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:36.334 18:58:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.334 18:58:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.335 ************************************ 00:06:36.335 START TEST nvmf_example 00:06:36.335 ************************************ 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:36.335 * Looking for test storage... 
00:06:36.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:36.335 18:58:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:42.910 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.910 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:42.910 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:42.910 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:42.910 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:42.910 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:42.910 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:42.910 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:42.910 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:42.910 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:42.911 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:42.911 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:42.911 Found net devices under 
0000:86:00.0: cvl_0_0 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:42.911 Found net devices under 0000:86:00.1: cvl_0_1 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:42.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:06:42.911 00:06:42.911 --- 10.0.0.2 ping statistics --- 00:06:42.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.911 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:06:42.911 00:06:42.911 --- 10.0.0.1 ping statistics --- 00:06:42.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.911 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=143319 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 143319 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 143319 ']' 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
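With NET_TYPE=phy the harness builds its loopback out of real hardware: one port of the dual-port E810 (cvl_0_0) is moved into a network namespace to play the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, and both directions are ping-verified before any NVMe traffic flows. Condensed from the commands traced above, with the interface names as they appear on this node:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                 # root ns -> namespace: 0.254 ms
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns: 0.121 ms
modprobe nvme-tcp                                  # kernel-side NVMe/TCP support

The example target itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk build/examples/nvmf -i 0 -g 10000 -m 0xF), which is why the nvmf tests wrap their app invocations in NVMF_TARGET_NS_CMD.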
00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:42.911 18:58:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:42.911 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.171 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.171 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:43.171 18:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:43.171 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:43.171 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.171 18:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:43.171 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.171 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.171 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:43.172 18:58:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:43.172 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.386 Initializing NVMe Controllers 00:06:55.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:55.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:55.386 Initialization complete. Launching workers. 00:06:55.386 ======================================================== 00:06:55.386 Latency(us) 00:06:55.386 Device Information : IOPS MiB/s Average min max 00:06:55.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18299.27 71.48 3499.61 529.02 16333.95 00:06:55.386 ======================================================== 00:06:55.386 Total : 18299.27 71.48 3499.61 529.02 16333.95 00:06:55.386 00:06:55.386 18:58:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:55.386 18:58:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:55.386 18:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:55.386 18:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:55.386 18:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:55.386 18:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:55.386 18:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:55.386 18:58:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:55.386 rmmod nvme_tcp 00:06:55.386 rmmod nvme_fabrics 00:06:55.386 rmmod nvme_keyring 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 143319 ']' 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 143319 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 143319 ']' 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 143319 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 143319 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 143319' 00:06:55.386 killing process with pid 143319 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 143319 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 143319 00:06:55.386 nvmf threads initialize successfully 00:06:55.386 bdev subsystem init successfully 00:06:55.386 created a nvmf target service 00:06:55.386 create targets's poll groups done 00:06:55.386 all subsystems of target started 00:06:55.386 nvmf target is running 
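Stripped of the rpc_cmd wrappers and xtrace noise, the example test is five RPCs to provision the target plus one initiator-side perf run. A consolidated sketch, assuming paths relative to the SPDK repo root and the 10.0.0.2 listener set up above:

# Provision the target: TCP transport, a 64 MB malloc bdev with 512 B blocks, one subsystem
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512          # returns the bdev name, Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Drive it: queue depth 64, 4 KiB random I/O at a 30% read mix, for 10 seconds
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

At this queue depth the run above lands at roughly 18.3k IOPS with a 3.5 ms mean latency (the Average/min/max columns in the table are microseconds, per the Latency(us) header).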
00:06:55.386 all subsystems of target stopped 00:06:55.386 destroy targets's poll groups done 00:06:55.386 destroyed the nvmf target service 00:06:55.386 bdev subsystem finish successfully 00:06:55.386 nvmf threads destroy successfully 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.386 18:58:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.958 18:58:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:55.958 18:58:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:55.958 18:58:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.958 18:58:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.958 00:06:55.958 real 0m19.604s 00:06:55.958 user 0m46.248s 00:06:55.958 sys 0m5.711s 00:06:55.958 18:58:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.958 18:58:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.958 ************************************ 00:06:55.958 END TEST nvmf_example 00:06:55.958 ************************************ 00:06:55.958 18:58:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:55.958 18:58:58 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:55.958 18:58:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:55.958 18:58:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.958 18:58:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:55.958 ************************************ 00:06:55.958 START TEST nvmf_filesystem 00:06:55.958 ************************************ 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:55.958 * Looking for test storage... 
00:06:55.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:55.958 18:58:58 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:55.958 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:55.959 18:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:56.222 18:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:56.223 #define SPDK_CONFIG_H 00:06:56.223 #define SPDK_CONFIG_APPS 1 00:06:56.223 #define SPDK_CONFIG_ARCH native 00:06:56.223 #undef SPDK_CONFIG_ASAN 00:06:56.223 #undef SPDK_CONFIG_AVAHI 00:06:56.223 #undef SPDK_CONFIG_CET 00:06:56.223 #define SPDK_CONFIG_COVERAGE 1 00:06:56.223 #define SPDK_CONFIG_CROSS_PREFIX 00:06:56.223 #undef SPDK_CONFIG_CRYPTO 00:06:56.223 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:56.223 #undef SPDK_CONFIG_CUSTOMOCF 00:06:56.223 #undef SPDK_CONFIG_DAOS 00:06:56.223 #define SPDK_CONFIG_DAOS_DIR 00:06:56.223 #define SPDK_CONFIG_DEBUG 1 00:06:56.223 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:56.223 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:56.223 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:56.223 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:56.223 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:56.223 #undef SPDK_CONFIG_DPDK_UADK 00:06:56.223 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:56.223 #define SPDK_CONFIG_EXAMPLES 1 00:06:56.223 #undef SPDK_CONFIG_FC 00:06:56.223 #define SPDK_CONFIG_FC_PATH 00:06:56.223 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:56.223 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:56.223 #undef SPDK_CONFIG_FUSE 00:06:56.223 #undef SPDK_CONFIG_FUZZER 00:06:56.223 #define SPDK_CONFIG_FUZZER_LIB 00:06:56.223 #undef SPDK_CONFIG_GOLANG 00:06:56.223 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:56.223 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:56.223 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:56.223 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:56.223 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:56.223 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:56.223 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:56.223 #define SPDK_CONFIG_IDXD 1 00:06:56.223 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:56.223 #undef SPDK_CONFIG_IPSEC_MB 00:06:56.223 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:56.223 #define SPDK_CONFIG_ISAL 1 00:06:56.223 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:56.223 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:56.223 #define SPDK_CONFIG_LIBDIR 00:06:56.223 #undef SPDK_CONFIG_LTO 00:06:56.223 #define SPDK_CONFIG_MAX_LCORES 128 00:06:56.223 #define SPDK_CONFIG_NVME_CUSE 1 00:06:56.223 #undef SPDK_CONFIG_OCF 00:06:56.223 #define SPDK_CONFIG_OCF_PATH 00:06:56.223 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:56.223 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:56.223 #define SPDK_CONFIG_PGO_DIR 00:06:56.223 #undef SPDK_CONFIG_PGO_USE 00:06:56.223 #define SPDK_CONFIG_PREFIX /usr/local 00:06:56.223 #undef SPDK_CONFIG_RAID5F 00:06:56.223 #undef SPDK_CONFIG_RBD 00:06:56.223 #define SPDK_CONFIG_RDMA 1 00:06:56.223 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:56.223 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:56.223 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:56.223 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:56.223 #define SPDK_CONFIG_SHARED 1 00:06:56.223 #undef SPDK_CONFIG_SMA 00:06:56.223 #define SPDK_CONFIG_TESTS 1 00:06:56.223 #undef SPDK_CONFIG_TSAN 00:06:56.223 #define SPDK_CONFIG_UBLK 1 00:06:56.223 #define SPDK_CONFIG_UBSAN 1 00:06:56.223 #undef SPDK_CONFIG_UNIT_TESTS 00:06:56.223 #undef SPDK_CONFIG_URING 00:06:56.223 #define SPDK_CONFIG_URING_PATH 00:06:56.223 #undef SPDK_CONFIG_URING_ZNS 00:06:56.223 #undef SPDK_CONFIG_USDT 00:06:56.223 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:56.223 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:56.223 #define SPDK_CONFIG_VFIO_USER 1 00:06:56.223 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:56.223 #define SPDK_CONFIG_VHOST 1 00:06:56.223 #define SPDK_CONFIG_VIRTIO 1 00:06:56.223 #undef SPDK_CONFIG_VTUNE 00:06:56.223 #define SPDK_CONFIG_VTUNE_DIR 00:06:56.223 #define SPDK_CONFIG_WERROR 1 00:06:56.223 #define SPDK_CONFIG_WPDK_DIR 00:06:56.223 #undef SPDK_CONFIG_XNVME 00:06:56.223 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:56.223 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:56.224 18:58:58 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:56.224 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
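The trace above has just finished assembling the sanitizer environment that every test binary in this run inherits: ASAN and UBSAN options are exported, and a fresh LeakSanitizer suppression file is written with one known-noisy entry. A minimal sketch of that pattern, using the exact paths and option strings from the trace (the real autotest_common.sh logic also concatenates per-test suppression files via cat, elided here):

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"                       # start from a clean file each run
    echo "leak:libfuse3.so" >> "$asan_suppression_file"   # known fuse3 leak, deliberately ignored
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"
    export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
    export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"

exitcode=134 matches the abort path (128 + SIGABRT), so a UBSAN hit fails the test the same way an assert would.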
00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 145726 ]] 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 145726 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.hjnfiI 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.hjnfiI/tests/target /tmp/spdk.hjnfiI 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=190634913792 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974299648 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5339385856 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97983774720 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185489920 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9371648 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986842624 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:56.225 18:58:58 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=307200 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:56.225 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:56.225 * Looking for test storage... 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=190634913792 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7553978368 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:56.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:56.226 18:58:58 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
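Just above, the trace sourced nvmf/common.sh, which defines the initiator-side connection pieces: NVMF_PORT=4420, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn, a host NQN generated by nvme gen-hostnqn, and the NVME_HOST argument array. A sketch of how those variables compose into the nvme-cli connect call the filesystem test issues later; the target address 10.0.0.2 is only assigned further down by nvmf_tcp_init, so it is hard-coded here for illustration:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # uuid-based NQN, e.g. nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # hostid reuses the uuid suffix, as in this run
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn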
00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:56.226 18:58:58 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:56.226 18:58:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:02.804 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:02.804 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.804 18:59:04 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:02.804 Found net devices under 0000:86:00.0: cvl_0_0 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:02.804 Found net devices under 0000:86:00.1: cvl_0_1 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.804 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:02.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:02.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:07:02.805 00:07:02.805 --- 10.0.0.2 ping statistics --- 00:07:02.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.805 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:02.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:07:02.805 00:07:02.805 --- 10.0.0.1 ping statistics --- 00:07:02.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.805 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.805 ************************************ 00:07:02.805 START TEST nvmf_filesystem_no_in_capsule 00:07:02.805 ************************************ 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=148882 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 148882 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 148882 ']' 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.805 18:59:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:02.805 [2024-07-12 18:59:04.581580] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:07:02.805 [2024-07-12 18:59:04.581628] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.805 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.805 [2024-07-12 18:59:04.653835] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.805 [2024-07-12 18:59:04.736449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.805 [2024-07-12 18:59:04.736486] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.805 [2024-07-12 18:59:04.736493] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.805 [2024-07-12 18:59:04.736499] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.805 [2024-07-12 18:59:04.736504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
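
The block above is the harness's nvmf_tcp_init followed by the target launch: the two E810 ports enumerated earlier (cvl_0_0 and cvl_0_1) become a target/initiator pair on a single machine by moving the target-side port into a private network namespace, and nvmf_tgt is then started inside that namespace. A condensed sketch of the commands traced above (interface names, addresses, and the core mask are this rig's values; $SPDK_ROOT stands in for the full workspace path):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns
  ip netns exec cvl_0_0_ns_spdk $SPDK_ROOT/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Because NET_TYPE=phy and the two ports are physically linked, the NVMe/TCP traffic crosses the real ice NICs rather than the kernel loopback.
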
00:07:02.805 [2024-07-12 18:59:04.736553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.805 [2024-07-12 18:59:04.736663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.805 [2024-07-12 18:59:04.736764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.805 [2024-07-12 18:59:04.736765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.065 [2024-07-12 18:59:05.438308] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.065 Malloc1 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.065 [2024-07-12 18:59:05.590327] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.065 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:03.065 { 00:07:03.065 "name": "Malloc1", 00:07:03.065 "aliases": [ 00:07:03.065 "1fd01347-5c7e-4b07-97c6-4097c38c1bf4" 00:07:03.065 ], 00:07:03.065 "product_name": "Malloc disk", 00:07:03.065 "block_size": 512, 00:07:03.065 "num_blocks": 1048576, 00:07:03.065 "uuid": "1fd01347-5c7e-4b07-97c6-4097c38c1bf4", 00:07:03.065 "assigned_rate_limits": { 00:07:03.065 "rw_ios_per_sec": 0, 00:07:03.065 "rw_mbytes_per_sec": 0, 00:07:03.065 "r_mbytes_per_sec": 0, 00:07:03.065 "w_mbytes_per_sec": 0 00:07:03.065 }, 00:07:03.065 "claimed": true, 00:07:03.065 "claim_type": "exclusive_write", 00:07:03.065 "zoned": false, 00:07:03.065 "supported_io_types": { 00:07:03.065 "read": true, 00:07:03.065 "write": true, 00:07:03.065 "unmap": true, 00:07:03.066 "flush": true, 00:07:03.066 "reset": true, 00:07:03.066 "nvme_admin": false, 00:07:03.066 "nvme_io": false, 00:07:03.066 "nvme_io_md": false, 00:07:03.066 "write_zeroes": true, 00:07:03.066 "zcopy": true, 00:07:03.066 "get_zone_info": false, 00:07:03.066 "zone_management": false, 00:07:03.066 "zone_append": false, 00:07:03.066 "compare": false, 00:07:03.066 "compare_and_write": false, 00:07:03.066 "abort": true, 00:07:03.066 "seek_hole": false, 00:07:03.066 "seek_data": false, 00:07:03.066 "copy": true, 00:07:03.066 "nvme_iov_md": false 00:07:03.066 }, 00:07:03.066 "memory_domains": [ 00:07:03.066 { 
00:07:03.066 "dma_device_id": "system", 00:07:03.066 "dma_device_type": 1 00:07:03.066 }, 00:07:03.066 { 00:07:03.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.066 "dma_device_type": 2 00:07:03.066 } 00:07:03.066 ], 00:07:03.066 "driver_specific": {} 00:07:03.066 } 00:07:03.066 ]' 00:07:03.066 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:03.325 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:03.325 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:03.325 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:03.325 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:03.325 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:03.325 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:03.325 18:59:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:04.705 18:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:04.705 18:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:04.705 18:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:04.705 18:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:04.705 18:59:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:06.611 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:06.611 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:06.612 18:59:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:06.870 18:59:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:08.251 18:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:08.251 18:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:08.251 18:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:08.251 18:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.251 18:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.251 ************************************ 00:07:08.251 START TEST filesystem_ext4 00:07:08.251 ************************************ 00:07:08.251 18:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:08.251 18:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:08.251 18:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:08.251 18:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:08.251 18:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:08.251 18:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:08.251 18:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:08.251 18:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:08.251 18:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:08.251 18:59:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:08.251 18:59:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:08.251 mke2fs 1.46.5 (30-Dec-2021) 00:07:08.251 Discarding device blocks: 0/522240 done 00:07:08.251 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:08.251 Filesystem UUID: 56581400-ecf0-42cf-8d8e-b15eb5519fcf 00:07:08.251 Superblock backups stored on blocks: 00:07:08.251 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:08.251 00:07:08.251 Allocating group tables: 0/64 done 00:07:08.251 Writing inode tables: 0/64 done 00:07:10.789 Creating journal (8192 blocks): done 00:07:11.359 Writing superblocks and filesystem accounting information: 0/64 done 00:07:11.359 00:07:11.359 18:59:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:11.359 18:59:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:11.927 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:11.927 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:11.927 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:11.928 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:11.928 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:11.928 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:11.928 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 148882 00:07:11.928 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:11.928 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:11.928 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:11.928 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:12.187 00:07:12.187 real 0m4.079s 00:07:12.187 user 0m0.026s 00:07:12.187 sys 0m0.064s 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:12.187 ************************************ 00:07:12.187 END TEST filesystem_ext4 00:07:12.187 ************************************ 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:12.187 18:59:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.187 ************************************ 00:07:12.187 START TEST filesystem_btrfs 00:07:12.187 ************************************ 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:12.187 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:12.447 btrfs-progs v6.6.2 00:07:12.447 See https://btrfs.readthedocs.io for more information. 00:07:12.447 00:07:12.447 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:12.447 NOTE: several default settings have changed in version 5.15, please make sure 00:07:12.447 this does not affect your deployments: 00:07:12.447 - DUP for metadata (-m dup) 00:07:12.447 - enabled no-holes (-O no-holes) 00:07:12.447 - enabled free-space-tree (-R free-space-tree) 00:07:12.447 00:07:12.447 Label: (null) 00:07:12.447 UUID: bd956385-cd67-4208-932d-1d03001520a3 00:07:12.447 Node size: 16384 00:07:12.447 Sector size: 4096 00:07:12.447 Filesystem size: 510.00MiB 00:07:12.447 Block group profiles: 00:07:12.447 Data: single 8.00MiB 00:07:12.447 Metadata: DUP 32.00MiB 00:07:12.447 System: DUP 8.00MiB 00:07:12.447 SSD detected: yes 00:07:12.447 Zoned device: no 00:07:12.447 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:12.447 Runtime features: free-space-tree 00:07:12.447 Checksum: crc32c 00:07:12.447 Number of devices: 1 00:07:12.447 Devices: 00:07:12.447 ID SIZE PATH 00:07:12.447 1 510.00MiB /dev/nvme0n1p1 00:07:12.447 00:07:12.447 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:12.447 18:59:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 148882 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:13.384 00:07:13.384 real 0m1.098s 00:07:13.384 user 0m0.017s 00:07:13.384 sys 0m0.178s 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:13.384 ************************************ 00:07:13.384 END TEST filesystem_btrfs 00:07:13.384 ************************************ 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.384 ************************************ 00:07:13.384 START TEST filesystem_xfs 00:07:13.384 ************************************ 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:13.384 18:59:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:13.384 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:13.384 = sectsz=512 attr=2, projid32bit=1 00:07:13.384 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:13.384 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:13.384 data = bsize=4096 blocks=130560, imaxpct=25 00:07:13.384 = sunit=0 swidth=0 blks 00:07:13.384 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:13.384 log =internal log bsize=4096 blocks=16384, version=2 00:07:13.384 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:13.384 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:14.321 Discarding blocks...Done. 
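
With mkfs.xfs done, the same mount smoke test from target/filesystem.sh runs here as it did for ext4 and btrfs: write one file over the fabric-attached device, remove it, unmount, and confirm the target and its block devices survived. Roughly, per the trace (the i counter feeds an umount retry loop not spelled out here):

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa                      # one small write through NVMe/TCP
  sync
  rm /mnt/device/aaa
  sync
  i=0
  umount /mnt/device
  kill -0 "$nvmfpid"                         # target (pid 148882 in this suite) must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible
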
00:07:14.321 18:59:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:14.321 18:59:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:17.611 18:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:17.611 18:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:17.611 18:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:17.611 18:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:17.611 18:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:17.611 18:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:17.611 18:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 148882 00:07:17.611 18:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:17.611 18:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:17.611 18:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:17.611 18:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:17.611 00:07:17.611 real 0m4.055s 00:07:17.611 user 0m0.029s 00:07:17.611 sys 0m0.111s 00:07:17.611 18:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.611 18:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:17.611 ************************************ 00:07:17.611 END TEST filesystem_xfs 00:07:17.611 ************************************ 00:07:17.611 18:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:17.611 18:59:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:17.611 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:18.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:18.180 18:59:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 148882 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 148882 ']' 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 148882 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 148882 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 148882' 00:07:18.180 killing process with pid 148882 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 148882 00:07:18.180 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 148882 00:07:18.440 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:18.440 00:07:18.440 real 0m16.457s 00:07:18.440 user 1m4.766s 00:07:18.440 sys 0m1.381s 00:07:18.440 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.440 18:59:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.440 ************************************ 00:07:18.440 END TEST nvmf_filesystem_no_in_capsule 00:07:18.440 ************************************ 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.701 ************************************ 00:07:18.701 START TEST nvmf_filesystem_in_capsule 00:07:18.701 ************************************ 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=151777 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 151777 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 151777 ']' 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.701 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.701 [2024-07-12 18:59:21.104268] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:07:18.701 [2024-07-12 18:59:21.104306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.701 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.701 [2024-07-12 18:59:21.175416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.701 [2024-07-12 18:59:21.256350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.701 [2024-07-12 18:59:21.256385] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:18.701 [2024-07-12 18:59:21.256392] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.701 [2024-07-12 18:59:21.256399] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.701 [2024-07-12 18:59:21.256404] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:18.701 [2024-07-12 18:59:21.256458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.701 [2024-07-12 18:59:21.256565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.701 [2024-07-12 18:59:21.256666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.701 [2024-07-12 18:59:21.256666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.641 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.641 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:19.641 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:19.641 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:19.641 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.641 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.641 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:19.641 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:19.641 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.641 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.641 [2024-07-12 18:59:21.957230] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.641 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.641 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:19.641 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.641 18:59:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.641 Malloc1 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.641 18:59:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.641 [2024-07-12 18:59:22.097133] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:19.641 { 00:07:19.641 "name": "Malloc1", 00:07:19.641 "aliases": [ 00:07:19.641 "1801aab2-5082-4cca-a83b-f0d49175761c" 00:07:19.641 ], 00:07:19.641 "product_name": "Malloc disk", 00:07:19.641 "block_size": 512, 00:07:19.641 "num_blocks": 1048576, 00:07:19.641 "uuid": "1801aab2-5082-4cca-a83b-f0d49175761c", 00:07:19.641 "assigned_rate_limits": { 00:07:19.641 "rw_ios_per_sec": 0, 00:07:19.641 "rw_mbytes_per_sec": 0, 00:07:19.641 "r_mbytes_per_sec": 0, 00:07:19.641 "w_mbytes_per_sec": 0 00:07:19.641 }, 00:07:19.641 "claimed": true, 00:07:19.641 "claim_type": "exclusive_write", 00:07:19.641 "zoned": false, 00:07:19.641 "supported_io_types": { 00:07:19.641 "read": true, 00:07:19.641 "write": true, 00:07:19.641 "unmap": true, 00:07:19.641 "flush": true, 00:07:19.641 "reset": true, 00:07:19.641 "nvme_admin": false, 00:07:19.641 "nvme_io": false, 00:07:19.641 "nvme_io_md": false, 00:07:19.641 "write_zeroes": true, 00:07:19.641 "zcopy": true, 00:07:19.641 "get_zone_info": false, 00:07:19.641 "zone_management": false, 00:07:19.641 
"zone_append": false, 00:07:19.641 "compare": false, 00:07:19.641 "compare_and_write": false, 00:07:19.641 "abort": true, 00:07:19.641 "seek_hole": false, 00:07:19.641 "seek_data": false, 00:07:19.641 "copy": true, 00:07:19.641 "nvme_iov_md": false 00:07:19.641 }, 00:07:19.641 "memory_domains": [ 00:07:19.641 { 00:07:19.641 "dma_device_id": "system", 00:07:19.641 "dma_device_type": 1 00:07:19.641 }, 00:07:19.641 { 00:07:19.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.641 "dma_device_type": 2 00:07:19.641 } 00:07:19.641 ], 00:07:19.641 "driver_specific": {} 00:07:19.641 } 00:07:19.641 ]' 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:19.641 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:19.902 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:19.902 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:19.902 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:19.902 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:19.902 18:59:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:20.848 18:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:20.848 18:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:20.848 18:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:20.848 18:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:20.848 18:59:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:22.755 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:22.755 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:22.755 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:22.755 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:22.756 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:23.015 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:23.015 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:23.015 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:23.015 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:23.015 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:23.015 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:23.015 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:23.015 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:23.015 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:23.015 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:23.015 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:23.015 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:23.015 18:59:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:23.950 18:59:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:24.887 18:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:24.887 18:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:24.887 18:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:24.887 18:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.887 18:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.887 ************************************ 00:07:24.887 START TEST filesystem_in_capsule_ext4 00:07:24.887 ************************************ 00:07:24.887 18:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:24.887 18:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:24.887 18:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:24.887 18:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:24.887 18:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:24.887 18:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:24.887 18:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:24.887 18:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:24.887 18:59:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:24.887 18:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:24.887 18:59:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:24.887 mke2fs 1.46.5 (30-Dec-2021) 00:07:25.146 Discarding device blocks: 0/522240 done 00:07:25.146 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:25.146 Filesystem UUID: 77005ab2-54e3-451a-a155-a10af86a1db6 00:07:25.146 Superblock backups stored on blocks: 00:07:25.146 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:25.146 00:07:25.146 Allocating group tables: 0/64 done 00:07:25.146 Writing inode tables: 0/64 done 00:07:25.146 Creating journal (8192 blocks): done 00:07:26.185 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:07:26.185 00:07:26.185 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:26.185 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:26.185 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:26.185 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:26.185 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:26.185 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:26.185 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:26.185 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:26.185 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 151777 00:07:26.185 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:26.185 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:26.185 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:26.185 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:26.185 00:07:26.185 real 0m1.326s 00:07:26.185 user 0m0.026s 00:07:26.185 sys 0m0.063s 00:07:26.185 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.185 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:26.185 ************************************ 00:07:26.185 END TEST filesystem_in_capsule_ext4 00:07:26.185 ************************************ 
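Each filesystem_in_capsule_* case repeats the create-and-verify pattern just traced for ext4; a condensed sketch of target/filesystem.sh's flow as it appears in the log (error handling and xtrace plumbing omitted):

    make_filesystem ext4 /dev/nvme0n1p1       # expands to mkfs.ext4 -F; ext4 is the only fstype that takes -F, the others get -f
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync             # prove the filesystem accepts and flushes writes over NVMe/TCP
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # the target (pid 151777 here) must have survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still exported
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present

The btrfs and xfs runs that follow differ only in the mkfs invocation.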
00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.445 ************************************ 00:07:26.445 START TEST filesystem_in_capsule_btrfs 00:07:26.445 ************************************ 00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:26.445 18:59:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:26.445 btrfs-progs v6.6.2 00:07:26.445 See https://btrfs.readthedocs.io for more information. 00:07:26.445 00:07:26.445 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:26.445 NOTE: several default settings have changed in version 5.15, please make sure 00:07:26.445 this does not affect your deployments: 00:07:26.445 - DUP for metadata (-m dup) 00:07:26.445 - enabled no-holes (-O no-holes) 00:07:26.445 - enabled free-space-tree (-R free-space-tree) 00:07:26.445 00:07:26.445 Label: (null) 00:07:26.445 UUID: 4b287b1b-64f9-4728-a82a-31de7c7a40eb 00:07:26.445 Node size: 16384 00:07:26.445 Sector size: 4096 00:07:26.445 Filesystem size: 510.00MiB 00:07:26.445 Block group profiles: 00:07:26.445 Data: single 8.00MiB 00:07:26.445 Metadata: DUP 32.00MiB 00:07:26.445 System: DUP 8.00MiB 00:07:26.445 SSD detected: yes 00:07:26.445 Zoned device: no 00:07:26.445 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:26.445 Runtime features: free-space-tree 00:07:26.445 Checksum: crc32c 00:07:26.445 Number of devices: 1 00:07:26.445 Devices: 00:07:26.445 ID SIZE PATH 00:07:26.445 1 510.00MiB /dev/nvme0n1p1 00:07:26.445 00:07:26.445 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:26.445 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:26.704 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:26.704 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:26.704 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 151777 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:26.964 00:07:26.964 real 0m0.504s 00:07:26.964 user 0m0.026s 00:07:26.964 sys 0m0.123s 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:26.964 ************************************ 00:07:26.964 END TEST filesystem_in_capsule_btrfs 00:07:26.964 ************************************ 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.964 ************************************ 00:07:26.964 START TEST filesystem_in_capsule_xfs 00:07:26.964 ************************************ 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:26.964 18:59:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:26.964 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:26.964 = sectsz=512 attr=2, projid32bit=1 00:07:26.964 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:26.964 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:26.964 data = bsize=4096 blocks=130560, imaxpct=25 00:07:26.964 = sunit=0 swidth=0 blks 00:07:26.964 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:26.964 log =internal log bsize=4096 blocks=16384, version=2 00:07:26.964 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:26.964 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:27.903 Discarding blocks...Done. 
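The mkfs.xfs geometry above is internally consistent with the 512 MiB namespace: the data section is 130560 blocks of 4096 bytes, the same 510 MiB that mkfs.ext4 (522240 1k blocks) and mkfs.btrfs reported, the 2 MiB difference being partition alignment and GPT overhead. A quick arithmetic check:

    echo $(( 130560 * 4096 ))             # 534773760 bytes
    echo $(( 130560 * 4096 / 1048576 ))   # 510 MiB -- the SPDK_TEST partition size
    echo $(( 522240 * 1024 / 1048576 ))   # 510 MiB -- ext4 saw the same partition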
00:07:27.903 18:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:27.903 18:59:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 151777 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.440 00:07:30.440 real 0m3.205s 00:07:30.440 user 0m0.018s 00:07:30.440 sys 0m0.076s 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:30.440 ************************************ 00:07:30.440 END TEST filesystem_in_capsule_xfs 00:07:30.440 ************************************ 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:30.440 18:59:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:30.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:30.699 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:30.699 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:30.699 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:30.699 18:59:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:30.699 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:30.699 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:30.699 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:30.699 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:30.699 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.700 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.700 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.700 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:30.700 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 151777 00:07:30.700 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 151777 ']' 00:07:30.700 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 151777 00:07:30.700 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:30.700 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.700 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 151777 00:07:30.700 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:30.700 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:30.700 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 151777' 00:07:30.700 killing process with pid 151777 00:07:30.700 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 151777 00:07:30.700 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 151777 00:07:30.959 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:30.959 00:07:30.959 real 0m12.412s 00:07:30.959 user 0m48.713s 00:07:30.959 sys 0m1.232s 00:07:30.959 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.959 18:59:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.959 ************************************ 00:07:30.959 END TEST nvmf_filesystem_in_capsule 00:07:30.959 ************************************ 00:07:30.959 18:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:30.959 18:59:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:30.959 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:07:30.959 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:30.959 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:30.959 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:30.959 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:30.959 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:30.959 rmmod nvme_tcp 00:07:30.959 rmmod nvme_fabrics 00:07:31.220 rmmod nvme_keyring 00:07:31.220 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.220 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:31.220 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:31.220 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:31.220 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:31.220 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:31.220 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:31.220 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.220 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:31.220 18:59:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.220 18:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.220 18:59:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.130 18:59:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:33.131 00:07:33.131 real 0m37.223s 00:07:33.131 user 1m55.317s 00:07:33.131 sys 0m7.129s 00:07:33.131 18:59:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.131 18:59:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.131 ************************************ 00:07:33.131 END TEST nvmf_filesystem 00:07:33.131 ************************************ 00:07:33.131 18:59:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:33.131 18:59:35 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:33.131 18:59:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:33.131 18:59:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.131 18:59:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.391 ************************************ 00:07:33.391 START TEST nvmf_target_discovery 00:07:33.391 ************************************ 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:33.391 * Looking for test storage... 
00:07:33.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:33.391 18:59:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.972 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.973 18:59:41 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:39.973 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:39.973 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:39.973 Found net devices under 0000:86:00.0: cvl_0_0 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:39.973 Found net devices under 0000:86:00.1: cvl_0_1 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:39.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:07:39.973 00:07:39.973 --- 10.0.0.2 ping statistics --- 00:07:39.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.973 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:07:39.973 00:07:39.973 --- 10.0.0.1 ping statistics --- 00:07:39.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.973 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=157565 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 157565 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 157565 ']' 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:39.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.973 18:59:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.973 [2024-07-12 18:59:41.659050] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:07:39.973 [2024-07-12 18:59:41.659097] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.973 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.973 [2024-07-12 18:59:41.729573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.973 [2024-07-12 18:59:41.802900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.973 [2024-07-12 18:59:41.802939] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.974 [2024-07-12 18:59:41.802946] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.974 [2024-07-12 18:59:41.802952] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.974 [2024-07-12 18:59:41.802956] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.974 [2024-07-12 18:59:41.803069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.974 [2024-07-12 18:59:41.803193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.974 [2024-07-12 18:59:41.803278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.974 [2024-07-12 18:59:41.803279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.974 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.974 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:39.974 18:59:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:39.974 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:39.974 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.974 18:59:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.974 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:39.974 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.974 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:39.974 [2024-07-12 18:59:42.507295] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.974 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.974 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:39.974 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:39.974 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
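The bdev_null_create above is the first pass of discovery.sh's setup loop; condensed from the rpc_cmd calls in the trace (NULL_BDEV_SIZE=102400 and NULL_BLOCK_SIZE=512 come from the header of this test):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192           # transport options exactly as logged
    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create "Null$i" 102400 512          # null bdev, 512 B blocks; allocates no backing memory
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430   # becomes Discovery Log Entry 5 below

Four NVMe subsystems plus the current discovery subsystem and one referral account for the "Number of Records 6" in the nvme discover output that follows.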
00:07:39.974 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.974 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.234 Null1 00:07:40.234 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.234 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:40.234 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.234 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.235 [2024-07-12 18:59:42.564879] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.235 Null2 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:40.235 18:59:42 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.235 Null3 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.235 Null4 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.235 18:59:42 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:40.235 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
00:07:40.494
00:07:40.494 Discovery Log Number of Records 6, Generation counter 6
00:07:40.494 =====Discovery Log Entry 0======
00:07:40.494 trtype: tcp
00:07:40.494 adrfam: ipv4
00:07:40.494 subtype: current discovery subsystem
00:07:40.494 treq: not required
00:07:40.494 portid: 0
00:07:40.494 trsvcid: 4420
00:07:40.494 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:07:40.494 traddr: 10.0.0.2
00:07:40.494 eflags: explicit discovery connections, duplicate discovery information
00:07:40.494 sectype: none
00:07:40.494 =====Discovery Log Entry 1======
00:07:40.494 trtype: tcp
00:07:40.494 adrfam: ipv4
00:07:40.494 subtype: nvme subsystem
00:07:40.494 treq: not required
00:07:40.494 portid: 0
00:07:40.494 trsvcid: 4420
00:07:40.494 subnqn: nqn.2016-06.io.spdk:cnode1
00:07:40.494 traddr: 10.0.0.2
00:07:40.494 eflags: none
00:07:40.494 sectype: none
00:07:40.494 =====Discovery Log Entry 2======
00:07:40.494 trtype: tcp
00:07:40.494 adrfam: ipv4
00:07:40.494 subtype: nvme subsystem
00:07:40.494 treq: not required
00:07:40.494 portid: 0
00:07:40.494 trsvcid: 4420
00:07:40.494 subnqn: nqn.2016-06.io.spdk:cnode2
00:07:40.494 traddr: 10.0.0.2
00:07:40.494 eflags: none
00:07:40.494 sectype: none
00:07:40.494 =====Discovery Log Entry 3======
00:07:40.494 trtype: tcp
00:07:40.494 adrfam: ipv4
00:07:40.494 subtype: nvme subsystem
00:07:40.494 treq: not required
00:07:40.494 portid: 0
00:07:40.494 trsvcid: 4420
00:07:40.494 subnqn: nqn.2016-06.io.spdk:cnode3
00:07:40.494 traddr: 10.0.0.2
00:07:40.494 eflags: none
00:07:40.494 sectype: none
00:07:40.494 =====Discovery Log Entry 4======
00:07:40.494 trtype: tcp
00:07:40.494 adrfam: ipv4
00:07:40.494 subtype: nvme subsystem
00:07:40.494 treq: not required
00:07:40.494 portid: 0
00:07:40.494 trsvcid: 4420
00:07:40.494 subnqn: nqn.2016-06.io.spdk:cnode4
00:07:40.494 traddr: 10.0.0.2
00:07:40.494 eflags: none
00:07:40.494 sectype: none
00:07:40.494 =====Discovery Log Entry 5======
00:07:40.494 trtype: tcp
00:07:40.494 adrfam: ipv4
00:07:40.494 subtype: discovery subsystem referral
00:07:40.494 treq: not required
00:07:40.494 portid: 0
00:07:40.494 trsvcid: 4430
00:07:40.494 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:07:40.494 traddr: 10.0.0.2
00:07:40.494 eflags: none
00:07:40.494 sectype: none
00:07:40.494 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:07:40.494 Perform nvmf subsystem discovery via RPC
00:07:40.494 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:07:40.494 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:40.494 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:40.494 [
00:07:40.494 {
00:07:40.494 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:07:40.494 "subtype": "Discovery",
00:07:40.494 "listen_addresses": [
00:07:40.494 {
00:07:40.494 "trtype": "TCP",
00:07:40.494 "adrfam": "IPv4",
00:07:40.494 "traddr": "10.0.0.2",
00:07:40.494 "trsvcid": "4420"
00:07:40.494 }
00:07:40.494 ],
00:07:40.494 "allow_any_host": true,
00:07:40.494 "hosts": []
00:07:40.494 },
00:07:40.494 {
00:07:40.494 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:07:40.494 "subtype": "NVMe",
00:07:40.494 "listen_addresses": [
00:07:40.494 {
00:07:40.494 "trtype": "TCP",
00:07:40.494 "adrfam": "IPv4",
00:07:40.494 "traddr": "10.0.0.2",
00:07:40.494 "trsvcid": "4420"
00:07:40.494 }
00:07:40.494 ],
00:07:40.494 "allow_any_host": true,
00:07:40.494 "hosts": [],
00:07:40.494 "serial_number": "SPDK00000000000001",
00:07:40.494 "model_number": "SPDK bdev Controller",
00:07:40.494 "max_namespaces": 32,
00:07:40.494 "min_cntlid": 1,
00:07:40.494 "max_cntlid": 65519,
00:07:40.494 "namespaces": [
00:07:40.494 {
00:07:40.494 "nsid": 1,
00:07:40.494 "bdev_name": "Null1",
00:07:40.494 "name": "Null1",
00:07:40.494 "nguid": "163F59D9C28A43DF918C3070887F064B",
00:07:40.494 "uuid": "163f59d9-c28a-43df-918c-3070887f064b"
00:07:40.494 }
00:07:40.494 ]
00:07:40.494 },
00:07:40.494 {
00:07:40.494 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:07:40.494 "subtype": "NVMe",
00:07:40.494 "listen_addresses": [
00:07:40.494 {
00:07:40.494 "trtype": "TCP",
00:07:40.494 "adrfam": "IPv4",
00:07:40.494 "traddr": "10.0.0.2",
00:07:40.494 "trsvcid": "4420"
00:07:40.494 }
00:07:40.494 ],
00:07:40.494 "allow_any_host": true,
00:07:40.494 "hosts": [],
00:07:40.494 "serial_number": "SPDK00000000000002",
00:07:40.494 "model_number": "SPDK bdev Controller",
00:07:40.494 "max_namespaces": 32,
00:07:40.494 "min_cntlid": 1,
00:07:40.494 "max_cntlid": 65519,
00:07:40.494 "namespaces": [
00:07:40.494 {
00:07:40.494 "nsid": 1,
00:07:40.494 "bdev_name": "Null2",
00:07:40.494 "name": "Null2",
00:07:40.494 "nguid": "B9F7B7A143B04743A1AEC96A1448E78E",
00:07:40.494 "uuid": "b9f7b7a1-43b0-4743-a1ae-c96a1448e78e"
00:07:40.494 }
00:07:40.494 ]
00:07:40.494 },
00:07:40.494 {
00:07:40.494 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:07:40.494 "subtype": "NVMe",
00:07:40.494 "listen_addresses": [
00:07:40.494 {
00:07:40.494 "trtype": "TCP",
00:07:40.494 "adrfam": "IPv4",
00:07:40.494 "traddr": "10.0.0.2",
00:07:40.494 "trsvcid": "4420"
00:07:40.494 }
00:07:40.494 ],
00:07:40.494 "allow_any_host": true,
00:07:40.494 "hosts": [],
00:07:40.494 "serial_number": "SPDK00000000000003",
00:07:40.494 "model_number": "SPDK bdev Controller",
00:07:40.494 "max_namespaces": 32,
00:07:40.494 "min_cntlid": 1,
00:07:40.494 "max_cntlid": 65519,
00:07:40.494 "namespaces": [
00:07:40.494 {
00:07:40.494 "nsid": 1,
00:07:40.494 "bdev_name": "Null3",
00:07:40.494 "name": "Null3",
00:07:40.494 "nguid": "B4E45F880FB041E0A899ED4D9F59CF9F",
00:07:40.494 "uuid": "b4e45f88-0fb0-41e0-a899-ed4d9f59cf9f"
00:07:40.494 }
00:07:40.494 ]
00:07:40.494 },
00:07:40.494 {
00:07:40.494 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:07:40.494 "subtype": "NVMe",
00:07:40.494 "listen_addresses": [
00:07:40.494 {
00:07:40.494 "trtype": "TCP",
00:07:40.494 "adrfam": "IPv4",
00:07:40.494 "traddr": "10.0.0.2",
00:07:40.494 "trsvcid": "4420"
00:07:40.494 }
00:07:40.494 ],
00:07:40.494 "allow_any_host": true,
00:07:40.494 "hosts": [],
00:07:40.494 "serial_number": "SPDK00000000000004",
00:07:40.494 "model_number": "SPDK bdev Controller",
00:07:40.494 "max_namespaces": 32,
00:07:40.494 "min_cntlid": 1,
00:07:40.494 "max_cntlid": 65519,
00:07:40.494 "namespaces": [
00:07:40.494 {
00:07:40.494 "nsid": 1,
00:07:40.494 "bdev_name": "Null4",
00:07:40.494 "name": "Null4",
00:07:40.494 "nguid": "C1E7CA8D9C1D47AB85123F2FA29D3C63",
00:07:40.494 "uuid": "c1e7ca8d-9c1d-47ab-8512-3f2fa29d3c63"
00:07:40.494 }
00:07:40.494 ]
00:07:40.494 }
00:07:40.494 ]
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.495 18:59:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.495 18:59:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.495 18:59:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:40.495 18:59:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:40.495 18:59:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:40.495 18:59:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:40.495 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:40.495 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:40.495 18:59:43 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:40.495 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:40.495 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:40.495 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:40.495 rmmod nvme_tcp 00:07:40.495 rmmod nvme_fabrics 00:07:40.753 rmmod nvme_keyring 00:07:40.753 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:40.753 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:40.753 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:40.753 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 157565 ']' 00:07:40.753 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 157565 00:07:40.753 18:59:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 157565 ']' 00:07:40.753 18:59:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 157565 00:07:40.753 18:59:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:40.753 18:59:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:40.753 18:59:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 157565 00:07:40.753 18:59:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:40.753 18:59:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:40.754 18:59:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 157565' 00:07:40.754 killing process with pid 157565 00:07:40.754 18:59:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 157565 00:07:40.754 18:59:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 157565 00:07:41.013 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:41.013 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:41.013 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:41.013 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:41.013 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:41.013 18:59:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.013 18:59:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.013 18:59:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.922 18:59:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:42.922 00:07:42.922 real 0m9.692s 00:07:42.922 user 0m7.929s 00:07:42.922 sys 0m4.647s 00:07:42.922 18:59:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.922 18:59:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.922 ************************************ 00:07:42.922 END TEST nvmf_target_discovery 00:07:42.922 ************************************ 00:07:42.922 18:59:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:07:42.922 18:59:45 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:42.922 18:59:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:42.922 18:59:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.922 18:59:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:42.922 ************************************ 00:07:42.922 START TEST nvmf_referrals 00:07:42.922 ************************************ 00:07:42.922 18:59:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:43.182 * Looking for test storage... 00:07:43.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
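
The three loopback addresses above, together with NVMF_PORT_REFERRAL=4430 set on the next trace line, are the dummy discovery services this test advertises. In NVMe-oF, a referral is a discovery-log entry that points the host at a further discovery controller rather than at an I/O subsystem. The RPC cycle the script is built around, sketched with scripts/rpc.py (the default /var/tmp/spdk.sock socket is an assumption; the test issues the same RPCs through rpc_cmd):

  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430     # advertise another discovery service
  ./scripts/rpc.py nvmf_discovery_get_referrals                                # JSON array the test filters with jq
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
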
00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:43.182 18:59:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.759 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.760 18:59:51 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:49.760 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:49.760 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.760 18:59:51 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:49.760 Found net devices under 0000:86:00.0: cvl_0_0 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:49.760 Found net devices under 0000:86:00.1: cvl_0_1 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.760 18:59:51 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:49.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:49.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms
00:07:49.760
00:07:49.760 --- 10.0.0.2 ping statistics ---
00:07:49.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:49.760 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:49.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:49.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms
00:07:49.760
00:07:49.760 --- 10.0.0.1 ping statistics ---
00:07:49.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:49.760 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=161358
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 161358
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 161358 ']'
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
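
Before the target comes up below, nvmf/common.sh has wired a point-to-point topology out of the two E810 ports; condensed from the trace above (interface names cvl_0_0/cvl_0_1 are specific to this rig), the plumbing and the launch are:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # both directions were ping-verified above; then the target starts inside the namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF    # 4-core mask, all tracepoint groups
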
00:07:49.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:49.760 18:59:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:49.760 [2024-07-12 18:59:51.467941] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:07:49.760 [2024-07-12 18:59:51.467983] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:49.760 EAL: No free 2048 kB hugepages reported on node 1
00:07:49.760 [2024-07-12 18:59:51.535927] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:49.760 [2024-07-12 18:59:51.616021] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:49.760 [2024-07-12 18:59:51.616055] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:49.760 [2024-07-12 18:59:51.616062] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:49.760 [2024-07-12 18:59:51.616068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:49.760 [2024-07-12 18:59:51.616074] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:49.760 [2024-07-12 18:59:51.616117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:49.760 [2024-07-12 18:59:51.616241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:49.760 [2024-07-12 18:59:51.616332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:49.760 [2024-07-12 18:59:51.616333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:49.760 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:49.760 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0
00:07:49.760 18:59:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:07:49.760 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable
00:07:49.760 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:49.760 18:59:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:49.760 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:49.760 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:49.760 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:49.761 [2024-07-12 18:59:52.325188] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:50.020 [2024-07-12 18:59:52.338586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on
10.0.0.2 port 8009 *** 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:50.020 18:59:52 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:50.021 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:50.281 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 
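
Every referral mutation in this suite is verified from two sides: the target's own table over RPC, and what a host actually receives in the discovery log page. A sketch of the comparison get_referral_ips performs, with the jq filters copied from the trace ($NVME_HOSTNQN and $NVME_HOSTID stand for the generated host identity used throughout this log):

  ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

Both pipelines must print the same sorted address list; right after the three adds that was "127.0.0.2 127.0.0.3 127.0.0.4", and after the removals both go empty, as the trace shows.
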
00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:50.540 18:59:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:50.540 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:50.540 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:50.540 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:50.540 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:50.540 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:50.540 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:50.540 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:50.799 18:59:53 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:50.799 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:50.799 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:50.799 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:50.799 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:50.799 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:50.799 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:50.799 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:50.799 18:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.799 18:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:51.058 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:51.317 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:51.576 18:59:53 nvmf_tcp.nvmf_referrals 
-- target/referrals.sh@26 -- # echo 00:07:51.576 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:51.576 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:51.576 18:59:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:51.576 18:59:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:51.576 18:59:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:51.576 18:59:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:51.576 18:59:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:51.577 18:59:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:51.577 18:59:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:51.577 rmmod nvme_tcp 00:07:51.577 rmmod nvme_fabrics 00:07:51.577 rmmod nvme_keyring 00:07:51.577 18:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:51.577 18:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:51.577 18:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:51.577 18:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 161358 ']' 00:07:51.577 18:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 161358 00:07:51.577 18:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 161358 ']' 00:07:51.577 18:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 161358 00:07:51.577 18:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:51.577 18:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:51.577 18:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 161358 00:07:51.577 18:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:51.577 18:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:51.577 18:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 161358' 00:07:51.577 killing process with pid 161358 00:07:51.577 18:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 161358 00:07:51.577 18:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 161358 00:07:51.836 18:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:51.836 18:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:51.836 18:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:51.836 18:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:51.836 18:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:51.836 18:59:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.836 18:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.836 18:59:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.741 18:59:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:53.741 00:07:53.741 real 0m10.827s 00:07:53.741 user 0m12.933s 00:07:53.741 sys 0m5.016s 00:07:53.741 18:59:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:07:53.741 18:59:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.741 ************************************ 00:07:53.741 END TEST nvmf_referrals 00:07:53.741 ************************************ 00:07:53.999 18:59:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:53.999 18:59:56 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:53.999 18:59:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:53.999 18:59:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.999 18:59:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.999 ************************************ 00:07:53.999 START TEST nvmf_connect_disconnect 00:07:53.999 ************************************ 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:54.000 * Looking for test storage... 00:07:54.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 
-- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:54.000 
18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:54.000 18:59:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.575 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.575 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:00.575 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:00.575 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:00.575 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.576 19:00:02 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:00.576 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:00.576 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
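An annotation for readers following the trace: the gather_supported_nvmf_pci_devs pass above builds allow-lists of PCI vendor/device IDs (e810, x722, mlx) and matches each NIC on the bus against them before deciding which ports the test can use. A minimal stand-alone sketch of that sysfs walk, assuming an Intel E810-only box; the 0x8086/0x159b IDs and the net/ and operstate paths are the ones visible in the trace, everything else is illustrative:

  #!/usr/bin/env bash
  # Sketch: find Intel E810 NICs (vendor 0x8086, device 0x159b) and report
  # which kernel net interface sits on each PCI function, plus its link state.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")    # e.g. 0x8086
      device=$(cat "$pci/device")    # e.g. 0x159b
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      echo "Found ${pci##*/} ($vendor - $device)"
      for net in "$pci"/net/*; do
          [[ -e $net ]] || continue                      # port may be unbound
          dev=${net##*/}
          state=$(cat "/sys/class/net/$dev/operstate")   # the '[[ up == up ]]' check
          echo "  net device: $dev (link $state)"
      done
  done
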
00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:00.576 Found net devices under 0000:86:00.0: cvl_0_0 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:00.576 Found net devices under 0000:86:00.1: cvl_0_1 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:00.576 19:00:02 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:00.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:08:00.576 00:08:00.576 --- 10.0.0.2 ping statistics --- 00:08:00.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.576 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:00.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:08:00.576 00:08:00.576 --- 10.0.0.1 ping statistics --- 00:08:00.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.576 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=165565 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 165565 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 165565 ']' 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.576 19:00:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.576 [2024-07-12 19:00:02.344186] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
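The nvmf_tcp_init sequence traced above is worth pausing on: to run target and initiator on one physical host, the harness moves one port of the NIC pair into a private network namespace, addresses the two ends as 10.0.0.2 (target) and 10.0.0.1 (initiator), opens TCP port 4420, and ping-checks both directions before starting the target. A condensed sketch of that bring-up, using the interface and namespace names from this run but otherwise simplified:

  # Target side lives in its own namespace; initiator stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Address the two ends of the link.
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target

  # Bring the links up and open the NVMe/TCP port on the initiator side.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity-check reachability in both directions before launching nvmf_tgt.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
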
00:08:00.576 [2024-07-12 19:00:02.344240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.576 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.576 [2024-07-12 19:00:02.413748] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.576 [2024-07-12 19:00:02.487873] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.576 [2024-07-12 19:00:02.487914] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.576 [2024-07-12 19:00:02.487921] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.576 [2024-07-12 19:00:02.487927] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.576 [2024-07-12 19:00:02.487932] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.576 [2024-07-12 19:00:02.488050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.576 [2024-07-12 19:00:02.488166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.576 [2024-07-12 19:00:02.488194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.576 [2024-07-12 19:00:02.488195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.835 [2024-07-12 19:00:03.191067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:00.835 19:00:03 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.835 [2024-07-12 19:00:03.242982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:00.835 19:00:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:04.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:17.308 rmmod nvme_tcp 00:08:17.308 rmmod nvme_fabrics 00:08:17.308 rmmod nvme_keyring 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 165565 ']' 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 165565 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 165565 ']' 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 165565 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 165565 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 165565' 00:08:17.308 killing process with pid 165565 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 165565 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 165565 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.308 19:00:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.847 19:00:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:19.847 00:08:19.847 real 0m25.546s 00:08:19.847 user 1m10.703s 00:08:19.847 sys 0m5.498s 00:08:19.847 19:00:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.847 19:00:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.847 ************************************ 00:08:19.847 END TEST nvmf_connect_disconnect 00:08:19.847 ************************************ 00:08:19.847 19:00:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:19.847 19:00:21 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:19.847 19:00:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:19.847 19:00:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.847 19:00:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:19.847 ************************************ 00:08:19.847 START TEST nvmf_multitarget 00:08:19.847 ************************************ 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:19.847 * Looking for test storage... 
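Stripped of tracing, the nvmf_connect_disconnect suite that just reported END above reduces to a short RPC script: stand up one malloc-backed subsystem on the namespaced target, then repeatedly attach and detach a kernel host. A hedged reconstruction; the RPC names, sizes, NQN, and address are exactly those in the trace, while the rpc.py invocation, the nvme connect flags, and the waits between steps are simplified:

  rpc="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"
  nqn=nqn.2016-06.io.spdk:cnode1

  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512         # 64 MiB bdev, 512-byte blocks -> Malloc0
  $rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns $nqn Malloc0
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420

  # num_iterations=5 in the trace: each pass prints
  # "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)".
  for i in $(seq 1 5); do
      nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
      sleep 1                             # placeholder for the suite's readiness wait
      nvme disconnect -n "$nqn"
  done

  # Teardown (nvmftestfini) then retries "modprobe -v -r nvme-tcp" up to 20
  # times under set +e before killing the nvmf_tgt pid, as traced above.
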
00:08:19.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.847 19:00:22 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
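One idiom worth decoding before the trace continues: _remove_spdk_ns is always invoked as eval '_remove_spdk_ns 14> /dev/null'. The harness appears to route bash xtrace output through a dedicated file descriptor, so redirecting fd 14 for a single command silences that command's trace without toggling set -x globally. A self-contained sketch of the pattern; the fd number 14 comes from the trace, and BASH_XTRACEFD is an assumption about how it is wired up:

  #!/usr/bin/env bash
  exec 14>&2            # clone stderr onto fd 14...
  BASH_XTRACEFD=14      # ...and tell bash to write xtrace lines there
  set -x

  chatty() { echo "doing work"; }

  chatty                           # traced: '+ echo ...' lands on fd 14 -> stderr
  eval 'chatty 14> /dev/null'      # same call, the function body's trace is discarded
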
00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.848 19:00:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:25.122 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:25.122 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:25.122 Found net devices under 0000:86:00.0: cvl_0_0 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.122 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:25.123 Found net devices under 0000:86:00.1: cvl_0_1 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.123 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:25.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:08:25.383 00:08:25.383 --- 10.0.0.2 ping statistics --- 00:08:25.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.383 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:08:25.383 00:08:25.383 --- 10.0.0.1 ping statistics --- 00:08:25.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.383 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=172353 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 172353 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 172353 ']' 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.383 19:00:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:25.383 [2024-07-12 19:00:27.950538] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
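The nvmfappstart step above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers. A hedged sketch of that start-and-poll loop; the binary path, flags, and socket path are as traced, while the polling body is an illustrative simplification and rpc_get_methods is merely a cheap RPC to probe with:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
  for _ in $(seq 1 100); do
      if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break                        # target is up and serving RPCs
      fi
      kill -0 "$nvmfpid" || exit 1     # bail out if the app died during startup
      sleep 0.1
  done
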
00:08:25.383 [2024-07-12 19:00:27.950581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.642 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.642 [2024-07-12 19:00:28.022313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.642 [2024-07-12 19:00:28.101845] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.642 [2024-07-12 19:00:28.101882] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.642 [2024-07-12 19:00:28.101889] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.642 [2024-07-12 19:00:28.101895] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.642 [2024-07-12 19:00:28.101900] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.642 [2024-07-12 19:00:28.101978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.642 [2024-07-12 19:00:28.102105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.642 [2024-07-12 19:00:28.102214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.642 [2024-07-12 19:00:28.102215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.211 19:00:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.211 19:00:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:26.211 19:00:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.211 19:00:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.211 19:00:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:26.471 19:00:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.471 19:00:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:26.471 19:00:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:26.471 19:00:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:26.471 19:00:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:26.471 19:00:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:26.471 "nvmf_tgt_1" 00:08:26.471 19:00:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:26.728 "nvmf_tgt_2" 00:08:26.728 19:00:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:26.728 19:00:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:26.728 19:00:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:26.728 19:00:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:26.728 true 00:08:26.987 19:00:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:26.987 true 00:08:26.987 19:00:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:26.987 19:00:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:26.987 19:00:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:26.987 19:00:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:26.987 19:00:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:26.987 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:26.987 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:26.987 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:26.987 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:26.987 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:26.987 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:26.987 rmmod nvme_tcp 00:08:26.987 rmmod nvme_fabrics 00:08:26.987 rmmod nvme_keyring 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 172353 ']' 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 172353 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 172353 ']' 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 172353 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 172353 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 172353' 00:08:27.256 killing process with pid 172353 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 172353 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 172353 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.256 19:00:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.800 19:00:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.800 00:08:29.800 real 0m9.873s 00:08:29.800 user 0m9.161s 00:08:29.800 sys 0m4.776s 00:08:29.800 19:00:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.800 19:00:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:29.800 ************************************ 00:08:29.800 END TEST nvmf_multitarget 00:08:29.800 ************************************ 00:08:29.800 19:00:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:29.800 19:00:31 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:29.800 19:00:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:29.800 19:00:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.800 19:00:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.800 ************************************ 00:08:29.800 START TEST nvmf_rpc 00:08:29.800 ************************************ 00:08:29.800 19:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:29.800 * Looking for test storage... 
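
Stripped of the xtrace plumbing, the multitarget run that just ended exercised the following RPC sequence (a sketch; the test drives these through the multitarget_rpc.py wrapper shown above, and the expected counts match the '[' ... ']' checks in the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $rpc nvmf_get_targets | jq length            # 1: only the default target
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc nvmf_get_targets | jq length            # 3: default plus the two new ones
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  $rpc nvmf_get_targets | jq length            # back to 1
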
00:08:29.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.800 19:00:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.801 19:00:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.082 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.082 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:35.082 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
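
The gather_supported_nvmf_pci_devs scan that follows resolves each supported PCI ID to its kernel net device through sysfs. Stripped of the trace plumbing, the lookup amounts to this (a sketch; the PCI addresses and cvl_0_* names are taken from this run, with ice being the E810 driver):

  for pci in 0000:86:00.0 0000:86:00.1; do
      # A function bound to a net driver exposes its interface name here.
      ls "/sys/bus/pci/devices/$pci/net/"
  done
  # On this box the two e810 ports resolve to cvl_0_0 and cvl_0_1; nvmf_tcp_init
  # then moves cvl_0_0 into the cvl_0_0_ns_spdk namespace and keeps cvl_0_1 on
  # the host as the initiator-side interface.
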
00:08:35.082 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:35.082 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:35.082 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:35.082 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:35.082 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:35.082 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:35.082 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:35.082 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:35.082 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:35.082 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:35.082 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:35.082 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:35.083 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:35.083 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:35.083 Found net devices under 0000:86:00.0: cvl_0_0 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:35.083 Found net devices under 0000:86:00.1: cvl_0_1 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:35.083 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:35.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:08:35.342 00:08:35.342 --- 10.0.0.2 ping statistics --- 00:08:35.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.342 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:35.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:35.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:08:35.342 00:08:35.342 --- 10.0.0.1 ping statistics --- 00:08:35.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.342 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=176142 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 176142 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 176142 ']' 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.342 19:00:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.342 [2024-07-12 19:00:37.884672] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:08:35.342 [2024-07-12 19:00:37.884713] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.342 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.601 [2024-07-12 19:00:37.938523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.601 [2024-07-12 19:00:38.017273] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.601 [2024-07-12 19:00:38.017311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:35.601 [2024-07-12 19:00:38.017319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.601 [2024-07-12 19:00:38.017325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.601 [2024-07-12 19:00:38.017330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.601 [2024-07-12 19:00:38.021245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.601 [2024-07-12 19:00:38.021279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.601 [2024-07-12 19:00:38.021383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.601 [2024-07-12 19:00:38.021384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.171 19:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.171 19:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:36.171 19:00:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:36.172 19:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:36.172 19:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:36.432 "tick_rate": 2300000000, 00:08:36.432 "poll_groups": [ 00:08:36.432 { 00:08:36.432 "name": "nvmf_tgt_poll_group_000", 00:08:36.432 "admin_qpairs": 0, 00:08:36.432 "io_qpairs": 0, 00:08:36.432 "current_admin_qpairs": 0, 00:08:36.432 "current_io_qpairs": 0, 00:08:36.432 "pending_bdev_io": 0, 00:08:36.432 "completed_nvme_io": 0, 00:08:36.432 "transports": [] 00:08:36.432 }, 00:08:36.432 { 00:08:36.432 "name": "nvmf_tgt_poll_group_001", 00:08:36.432 "admin_qpairs": 0, 00:08:36.432 "io_qpairs": 0, 00:08:36.432 "current_admin_qpairs": 0, 00:08:36.432 "current_io_qpairs": 0, 00:08:36.432 "pending_bdev_io": 0, 00:08:36.432 "completed_nvme_io": 0, 00:08:36.432 "transports": [] 00:08:36.432 }, 00:08:36.432 { 00:08:36.432 "name": "nvmf_tgt_poll_group_002", 00:08:36.432 "admin_qpairs": 0, 00:08:36.432 "io_qpairs": 0, 00:08:36.432 "current_admin_qpairs": 0, 00:08:36.432 "current_io_qpairs": 0, 00:08:36.432 "pending_bdev_io": 0, 00:08:36.432 "completed_nvme_io": 0, 00:08:36.432 "transports": [] 00:08:36.432 }, 00:08:36.432 { 00:08:36.432 "name": "nvmf_tgt_poll_group_003", 00:08:36.432 "admin_qpairs": 0, 00:08:36.432 "io_qpairs": 0, 00:08:36.432 "current_admin_qpairs": 0, 00:08:36.432 "current_io_qpairs": 0, 00:08:36.432 "pending_bdev_io": 0, 00:08:36.432 "completed_nvme_io": 0, 00:08:36.432 "transports": [] 00:08:36.432 } 00:08:36.432 ] 00:08:36.432 }' 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 [2024-07-12 19:00:38.874544] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:36.432 "tick_rate": 2300000000, 00:08:36.432 "poll_groups": [ 00:08:36.432 { 00:08:36.432 "name": "nvmf_tgt_poll_group_000", 00:08:36.432 "admin_qpairs": 0, 00:08:36.432 "io_qpairs": 0, 00:08:36.432 "current_admin_qpairs": 0, 00:08:36.432 "current_io_qpairs": 0, 00:08:36.432 "pending_bdev_io": 0, 00:08:36.432 "completed_nvme_io": 0, 00:08:36.432 "transports": [ 00:08:36.432 { 00:08:36.432 "trtype": "TCP" 00:08:36.432 } 00:08:36.432 ] 00:08:36.432 }, 00:08:36.432 { 00:08:36.432 "name": "nvmf_tgt_poll_group_001", 00:08:36.432 "admin_qpairs": 0, 00:08:36.432 "io_qpairs": 0, 00:08:36.432 "current_admin_qpairs": 0, 00:08:36.432 "current_io_qpairs": 0, 00:08:36.432 "pending_bdev_io": 0, 00:08:36.432 "completed_nvme_io": 0, 00:08:36.432 "transports": [ 00:08:36.432 { 00:08:36.432 "trtype": "TCP" 00:08:36.432 } 00:08:36.432 ] 00:08:36.432 }, 00:08:36.432 { 00:08:36.432 "name": "nvmf_tgt_poll_group_002", 00:08:36.432 "admin_qpairs": 0, 00:08:36.432 "io_qpairs": 0, 00:08:36.432 "current_admin_qpairs": 0, 00:08:36.432 "current_io_qpairs": 0, 00:08:36.432 "pending_bdev_io": 0, 00:08:36.432 "completed_nvme_io": 0, 00:08:36.432 "transports": [ 00:08:36.432 { 00:08:36.432 "trtype": "TCP" 00:08:36.432 } 00:08:36.432 ] 00:08:36.432 }, 00:08:36.432 { 00:08:36.432 "name": "nvmf_tgt_poll_group_003", 00:08:36.432 "admin_qpairs": 0, 00:08:36.432 "io_qpairs": 0, 00:08:36.432 "current_admin_qpairs": 0, 00:08:36.432 "current_io_qpairs": 0, 00:08:36.432 "pending_bdev_io": 0, 00:08:36.432 "completed_nvme_io": 0, 00:08:36.432 "transports": [ 00:08:36.432 { 00:08:36.432 "trtype": "TCP" 00:08:36.432 } 00:08:36.432 ] 00:08:36.432 } 00:08:36.432 ] 00:08:36.432 }' 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.432 19:00:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.692 Malloc1 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.692 [2024-07-12 19:00:39.042474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:36.692 [2024-07-12 19:00:39.071025] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:36.692 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:36.692 could not add new controller: failed to write to nvme-fabrics device 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.692 19:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:38.073 19:00:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:38.073 19:00:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:38.073 19:00:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:38.073 19:00:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:38.073 19:00:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:39.980 19:00:42 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:39.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:39.980 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.981 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:39.981 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.981 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:39.981 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:39.981 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:39.981 [2024-07-12 19:00:42.525829] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:40.241 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:40.241 could not add new controller: failed to write to nvme-fabrics device 00:08:40.241 19:00:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:40.241 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:40.241 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:40.241 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:40.241 19:00:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:40.241 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.241 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.241 19:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.241 19:00:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:41.178 19:00:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:41.178 19:00:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:41.178 19:00:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:41.178 19:00:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:41.178 19:00:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:43.081 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:43.081 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:43.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:43.341 19:00:45 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.341 [2024-07-12 19:00:45.828375] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.341 19:00:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:44.722 19:00:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:44.722 19:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:44.722 19:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:44.722 19:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:44.722 19:00:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:46.629 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:46.629 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:46.629 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:46.629 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:46.629 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:46.629 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:46.629 19:00:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:46.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.629 19:00:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:46.629 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.630 [2024-07-12 19:00:49.156369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.630 19:00:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:48.010 19:00:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:48.010 19:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:08:48.010 19:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:48.010 19:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:48.010 19:00:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:49.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 [2024-07-12 19:00:52.397210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.918 19:00:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:51.295 19:00:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:51.295 19:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:51.295 19:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:51.295 19:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:51.295 19:00:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:53.200 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:53.200 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:53.200 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:53.200 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:53.200 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:53.200 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:53.200 19:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:53.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.200 19:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:53.200 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:53.200 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:53.200 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.201 [2024-07-12 19:00:55.718807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.201 19:00:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:54.581 19:00:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.581 19:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:54.581 19:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.581 19:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:54.581 19:00:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:56.490 19:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:56.490 19:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:56.490 19:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.490 19:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:56.490 19:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.490 
19:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:56.490 19:00:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:56.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.490 19:00:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:56.490 19:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:56.490 19:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:56.490 19:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.490 19:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:56.490 19:00:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.490 [2024-07-12 19:00:59.051411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.490 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.750 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.750 19:00:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:56.750 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.750 19:00:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.750 19:00:59 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.750 19:00:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:57.690 19:01:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:57.690 19:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:57.690 19:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:57.690 19:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:57.690 19:01:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:59.599 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:59.599 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:59.599 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:59.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.859 [2024-07-12 19:01:02.299483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.859 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.860 [2024-07-12 19:01:02.347586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.860 [2024-07-12 19:01:02.399783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.860 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
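Note: the waitforserial / waitforserial_disconnect helpers traced through the connect/disconnect cycles above poll lsblk until a block device carrying the subsystem serial SPDKISFASTANDAWESOME appears (or disappears). A minimal sketch of the pair, reconstructed from the xtrace output; the actual helpers in common/autotest_common.sh may differ in detail:

    # Sketch reconstructed from the xtrace above, not the verbatim helpers.
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        sleep 2
        while ((i++ <= 15)); do
            # count block devices whose SERIAL column matches the subsystem serial
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices == nvme_device_counter)) && return 0
            sleep 1
        done
        return 1
    }

    waitforserial_disconnect() {
        local serial=$1 i=0
        # poll until no block device reports the serial any more
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            ((++i > 15)) && return 1
            sleep 1
        done
        return 0
    }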
00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.118 [2024-07-12 19:01:02.447954] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
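Note: stripped of the xtrace noise, each pass of the second rpc.sh loop (target/rpc.sh@99-107, five passes per the seq 1 5 traced above) reduces to the rpc_cmd sequence below; rpc_cmd is the harness wrapper around scripts/rpc.py. A condensed restatement of the traced commands, not the verbatim script:

    # One create/teardown cycle against the TCP listener on 10.0.0.2:4420.
    loops=5
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done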
00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.118 [2024-07-12 19:01:02.496100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:00.118 "tick_rate": 2300000000, 00:09:00.118 "poll_groups": [ 00:09:00.118 { 00:09:00.118 "name": "nvmf_tgt_poll_group_000", 00:09:00.118 "admin_qpairs": 2, 00:09:00.118 "io_qpairs": 168, 00:09:00.118 "current_admin_qpairs": 0, 00:09:00.118 "current_io_qpairs": 0, 00:09:00.118 "pending_bdev_io": 0, 00:09:00.118 "completed_nvme_io": 329, 00:09:00.118 "transports": [ 00:09:00.118 { 00:09:00.118 "trtype": "TCP" 00:09:00.118 } 00:09:00.118 ] 00:09:00.118 }, 00:09:00.118 { 00:09:00.118 "name": "nvmf_tgt_poll_group_001", 00:09:00.118 "admin_qpairs": 2, 00:09:00.118 "io_qpairs": 168, 00:09:00.118 "current_admin_qpairs": 0, 00:09:00.118 "current_io_qpairs": 0, 00:09:00.118 "pending_bdev_io": 0, 00:09:00.118 "completed_nvme_io": 191, 00:09:00.118 "transports": [ 00:09:00.118 { 00:09:00.118 "trtype": "TCP" 00:09:00.118 } 00:09:00.118 ] 00:09:00.118 }, 00:09:00.118 { 
00:09:00.118 "name": "nvmf_tgt_poll_group_002", 00:09:00.118 "admin_qpairs": 1, 00:09:00.118 "io_qpairs": 168, 00:09:00.118 "current_admin_qpairs": 0, 00:09:00.118 "current_io_qpairs": 0, 00:09:00.118 "pending_bdev_io": 0, 00:09:00.118 "completed_nvme_io": 236, 00:09:00.118 "transports": [ 00:09:00.118 { 00:09:00.118 "trtype": "TCP" 00:09:00.118 } 00:09:00.118 ] 00:09:00.118 }, 00:09:00.118 { 00:09:00.118 "name": "nvmf_tgt_poll_group_003", 00:09:00.118 "admin_qpairs": 2, 00:09:00.118 "io_qpairs": 168, 00:09:00.118 "current_admin_qpairs": 0, 00:09:00.118 "current_io_qpairs": 0, 00:09:00.118 "pending_bdev_io": 0, 00:09:00.118 "completed_nvme_io": 266, 00:09:00.118 "transports": [ 00:09:00.118 { 00:09:00.118 "trtype": "TCP" 00:09:00.118 } 00:09:00.118 ] 00:09:00.118 } 00:09:00.118 ] 00:09:00.118 }' 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:00.118 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:00.118 rmmod nvme_tcp 00:09:00.118 rmmod nvme_fabrics 00:09:00.118 rmmod nvme_keyring 00:09:00.377 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:00.377 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:00.377 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:00.377 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 176142 ']' 00:09:00.377 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 176142 00:09:00.377 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 176142 ']' 00:09:00.377 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 176142 00:09:00.377 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:00.377 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:00.377 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 176142 00:09:00.377 19:01:02 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0
00:09:00.377 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:09:00.377 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 176142'
00:09:00.377 killing process with pid 176142
00:09:00.377 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 176142
00:09:00.377 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 176142
00:09:00.637 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:00.637 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:00.637 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:00.637 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:00.637 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:00.637 19:01:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:00.637 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:00.637 19:01:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:02.546 19:01:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:02.546
00:09:02.546 real 0m33.085s
00:09:02.546 user 1m41.169s
00:09:02.546 sys 0m6.086s
00:09:02.546 19:01:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:02.546 19:01:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:02.546 ************************************
00:09:02.546 END TEST nvmf_rpc
00:09:02.546 ************************************
00:09:02.546 19:01:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:09:02.546 19:01:05 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:09:02.546 19:01:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:09:02.546 19:01:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:02.546 19:01:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:02.546 ************************************
00:09:02.546 START TEST nvmf_invalid
00:09:02.546 ************************************
00:09:02.546 19:01:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:09:02.806 * Looking for test storage...
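Note: the nvmf_invalid suite that starts here feeds deliberately malformed arguments to scripts/rpc.py and asserts on the JSON-RPC error text that comes back. The recurring check pattern, condensed from the target/invalid.sh@40-41 trace further below (roughly; the script keeps the rpc.py path in $rpc, set at invalid.sh@12):

    # Condensed check pattern; a bogus target name must yield the expected error.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5183 2>&1) || true
    [[ $out == *'Unable to find target'* ]]   # pass only if the error text matches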
00:09:02.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.806 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:02.807 19:01:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:09.386 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:09.386 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:09.386 Found net devices under 0000:86:00.0: cvl_0_0 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]]
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:09:09.386 Found net devices under 0000:86:00.1: cvl_0_1
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:09.386 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:09.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:09.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms
00:09:09.387
00:09:09.387 --- 10.0.0.2 ping statistics ---
00:09:09.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:09.387 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:09.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:09.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms
00:09:09.387
00:09:09.387 --- 10.0.0.1 ping statistics ---
00:09:09.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:09.387 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:09.387 19:01:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=183957
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 183957
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 183957 ']'
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:09.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:09:09.387 [2024-07-12 19:01:11.070499] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
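Note: the plumbing traced above gives the target and the initiator separate network stacks on one host: the first e810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace, the second port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, and nvmf_tgt is then launched inside the namespace via NVMF_TARGET_NS_CMD. Stripped to the commands, the setup is roughly:

    # Condensed from the nvmf/common.sh@229-268 trace above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target netns -> initiator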
00:09:09.387 [2024-07-12 19:01:11.070543] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:09.387 EAL: No free 2048 kB hugepages reported on node 1
00:09:09.387 [2024-07-12 19:01:11.142249] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:09.387 [2024-07-12 19:01:11.222896] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:09.387 [2024-07-12 19:01:11.222931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:09.387 [2024-07-12 19:01:11.222939] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:09.387 [2024-07-12 19:01:11.222945] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:09.387 [2024-07-12 19:01:11.222950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:09.387 [2024-07-12 19:01:11.222995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:09.387 [2024-07-12 19:01:11.223103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:09:09.387 [2024-07-12 19:01:11.223210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:09.387 [2024-07-12 19:01:11.223211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:09:09.387 19:01:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5183
00:09:09.647 [2024-07-12 19:01:12.081636] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:09:09.647 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:09:09.647 {
00:09:09.647 "nqn": "nqn.2016-06.io.spdk:cnode5183",
00:09:09.647 "tgt_name": "foobar",
00:09:09.647 "method": "nvmf_create_subsystem",
00:09:09.647 "req_id": 1
00:09:09.647 }
00:09:09.647 Got JSON-RPC error response
00:09:09.647 response:
00:09:09.647 {
00:09:09.647 "code": -32603,
00:09:09.647 "message": "Unable to find target foobar"
00:09:09.647 }'
00:09:09.647 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:09:09.647 {
00:09:09.647 "nqn": "nqn.2016-06.io.spdk:cnode5183",
00:09:09.647 "tgt_name": "foobar",
00:09:09.647 "method": "nvmf_create_subsystem",
00:09:09.647 "req_id": 1
00:09:09.647 }
00:09:09.647 Got JSON-RPC error response
00:09:09.647 response:
00:09:09.647 {
00:09:09.647 "code": -32603,
00:09:09.647 "message": "Unable to find target foobar"
00:09:09.647 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:09:09.647 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:09:09.647 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11367
00:09:09.907 [2024-07-12 19:01:12.282352] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11367: invalid serial number 'SPDKISFASTANDAWESOME'
00:09:09.907 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:09:09.907 {
00:09:09.907 "nqn": "nqn.2016-06.io.spdk:cnode11367",
00:09:09.907 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:09:09.907 "method": "nvmf_create_subsystem",
00:09:09.907 "req_id": 1
00:09:09.907 }
00:09:09.907 Got JSON-RPC error response
00:09:09.907 response:
00:09:09.907 {
00:09:09.907 "code": -32602,
00:09:09.907 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:09:09.907 }'
00:09:09.907 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:09:09.907 {
00:09:09.907 "nqn": "nqn.2016-06.io.spdk:cnode11367",
00:09:09.907 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:09:09.907 "method": "nvmf_create_subsystem",
00:09:09.907 "req_id": 1
00:09:09.907 }
00:09:09.907 Got JSON-RPC error response
00:09:09.907 response:
00:09:09.907 {
00:09:09.907 "code": -32602,
00:09:09.907 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:09:09.907 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:09:09.907 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:09:09.907 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6381
[2024-07-12 19:01:12.466947] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6381: invalid model number 'SPDK_Controller'
00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:09:10.168 {
00:09:10.168 "nqn": "nqn.2016-06.io.spdk:cnode6381",
00:09:10.168 "model_number": "SPDK_Controller\u001f",
00:09:10.168 "method": "nvmf_create_subsystem",
00:09:10.168 "req_id": 1
00:09:10.168 }
00:09:10.168 Got JSON-RPC error response
00:09:10.168 response:
00:09:10.168 {
00:09:10.168 "code": -32602,
00:09:10.168 "message": "Invalid MN SPDK_Controller\u001f"
00:09:10.168 }'
00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:09:10.168 {
00:09:10.168 "nqn": "nqn.2016-06.io.spdk:cnode6381",
00:09:10.168 "model_number": "SPDK_Controller\u001f",
00:09:10.168 "method": "nvmf_create_subsystem",
00:09:10.168 "req_id": 1
00:09:10.168 }
00:09:10.168 Got JSON-RPC error response
00:09:10.168 response:
00:09:10.168 {
00:09:10.168 "code": -32602,
00:09:10.168 "message": "Invalid MN SPDK_Controller\u001f"
00:09:10.168 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86'
'87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.168 19:01:12 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.168 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.169 19:01:12 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ g == \- ]] 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'g'\''b\GRM4adJHDQ SPM,ci' 00:09:10.169 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'g'\''b\GRM4adJHDQ SPM,ci' nqn.2016-06.io.spdk:cnode18427 00:09:10.430 [2024-07-12 19:01:12.788060] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18427: invalid serial number 'g'b\GRM4adJHDQ SPM,ci' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:10.430 { 00:09:10.430 "nqn": "nqn.2016-06.io.spdk:cnode18427", 00:09:10.430 "serial_number": "g'\''b\\GRM4adJHDQ SPM,ci", 00:09:10.430 "method": "nvmf_create_subsystem", 00:09:10.430 "req_id": 1 00:09:10.430 } 00:09:10.430 Got JSON-RPC error response 00:09:10.430 response: 00:09:10.430 { 
00:09:10.430 "code": -32602, 00:09:10.430 "message": "Invalid SN g'\''b\\GRM4adJHDQ SPM,ci" 00:09:10.430 }' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:10.430 { 00:09:10.430 "nqn": "nqn.2016-06.io.spdk:cnode18427", 00:09:10.430 "serial_number": "g'b\\GRM4adJHDQ SPM,ci", 00:09:10.430 "method": "nvmf_create_subsystem", 00:09:10.430 "req_id": 1 00:09:10.430 } 00:09:10.430 Got JSON-RPC error response 00:09:10.430 response: 00:09:10.430 { 00:09:10.430 "code": -32602, 00:09:10.430 "message": "Invalid SN g'b\\GRM4adJHDQ SPM,ci" 00:09:10.430 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.430 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
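One reading aid for these traces: bash's xtrace re-quotes glob patterns character by character, which is why the message checks earlier in this log appear with every character backslash-escaped. The backslashes simply keep each character of the quoted substring literal; both spellings denote the same pattern:

# What the script says ...            # ... and how set -x logs it
[[ $out == *"Invalid SN"* ]]          # [[ ... == *\I\n\v\a\l\i\d\ \S\N* ]]
[[ $out == *"Invalid MN"* ]]          # [[ ... == *\I\n\v\a\l\i\d\ \M\N* ]]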
00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 
00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.431 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.691 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:10.691 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:10.691 19:01:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 
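Each negative case in this suite follows one idiom: call scripts/rpc.py with a deliberately invalid value, capture the JSON-RPC error text, and glob-match the message. The serial-number and model-number cases use gen_random_s 21 and gen_random_s 41 because the NVMe Identify Controller fields are 20 and 40 bytes, so one extra character is the smallest over-length input. A minimal sketch of the idiom behind the call that follows this loop, using the method and NQN shown in this log (the 2>&1 capture and || true are assumptions, not read from the trace):

# Expect nvmf_create_subsystem to reject a 41-byte model number with
# JSON-RPC code -32602 and an "Invalid MN ..." message.
out=$(./scripts/rpc.py nvmf_create_subsystem \
        -d "$(gen_random_s 41)" nqn.2016-06.io.spdk:cnode8202 2>&1) || true
[[ $out == *"Invalid MN"* ]]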
00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ i == \- ]] 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'iT!ZXsw/F#EE4v-J*+VjrQ,XS9I30|]Vaa\"%M9`Q' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'iT!ZXsw/F#EE4v-J*+VjrQ,XS9I30|]Vaa\"%M9`Q' nqn.2016-06.io.spdk:cnode8202 00:09:10.691 [2024-07-12 19:01:13.221514] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8202: invalid model number 'iT!ZXsw/F#EE4v-J*+VjrQ,XS9I30|]Vaa\"%M9`Q' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:10.691 { 00:09:10.691 "nqn": "nqn.2016-06.io.spdk:cnode8202", 00:09:10.691 "model_number": "iT!ZXsw/F#EE4v-J*+VjrQ,XS9I30|]Vaa\\\"%M9`Q", 00:09:10.691 "method": "nvmf_create_subsystem", 00:09:10.691 "req_id": 1 00:09:10.691 } 00:09:10.691 Got JSON-RPC error response 00:09:10.691 response: 00:09:10.691 { 00:09:10.691 "code": -32602, 00:09:10.691 "message": "Invalid MN 
iT!ZXsw/F#EE4v-J*+VjrQ,XS9I30|]Vaa\\\"%M9`Q" 00:09:10.691 }' 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:10.691 { 00:09:10.691 "nqn": "nqn.2016-06.io.spdk:cnode8202", 00:09:10.691 "model_number": "iT!ZXsw/F#EE4v-J*+VjrQ,XS9I30|]Vaa\\\"%M9`Q", 00:09:10.691 "method": "nvmf_create_subsystem", 00:09:10.691 "req_id": 1 00:09:10.691 } 00:09:10.691 Got JSON-RPC error response 00:09:10.691 response: 00:09:10.691 { 00:09:10.691 "code": -32602, 00:09:10.691 "message": "Invalid MN iT!ZXsw/F#EE4v-J*+VjrQ,XS9I30|]Vaa\\\"%M9`Q" 00:09:10.691 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:10.691 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:10.950 [2024-07-12 19:01:13.414244] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.950 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:11.209 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:11.209 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:11.209 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:11.209 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:11.209 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:11.469 [2024-07-12 19:01:13.815593] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:11.469 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:11.469 { 00:09:11.469 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:11.469 "listen_address": { 00:09:11.469 "trtype": "tcp", 00:09:11.469 "traddr": "", 00:09:11.469 "trsvcid": "4421" 00:09:11.469 }, 00:09:11.469 "method": "nvmf_subsystem_remove_listener", 00:09:11.469 "req_id": 1 00:09:11.469 } 00:09:11.469 Got JSON-RPC error response 00:09:11.469 response: 00:09:11.469 { 00:09:11.469 "code": -32602, 00:09:11.469 "message": "Invalid parameters" 00:09:11.469 }' 00:09:11.469 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:11.469 { 00:09:11.469 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:11.469 "listen_address": { 00:09:11.469 "trtype": "tcp", 00:09:11.469 "traddr": "", 00:09:11.469 "trsvcid": "4421" 00:09:11.469 }, 00:09:11.469 "method": "nvmf_subsystem_remove_listener", 00:09:11.469 "req_id": 1 00:09:11.469 } 00:09:11.469 Got JSON-RPC error response 00:09:11.469 response: 00:09:11.469 { 00:09:11.469 "code": -32602, 00:09:11.469 "message": "Invalid parameters" 00:09:11.469 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:11.469 19:01:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3900 -i 0 00:09:11.469 [2024-07-12 19:01:14.000228] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3900: invalid cntlid range [0-65519] 00:09:11.469 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:11.469 { 00:09:11.469 "nqn": "nqn.2016-06.io.spdk:cnode3900", 00:09:11.469 "min_cntlid": 0, 00:09:11.469 "method": "nvmf_create_subsystem", 00:09:11.469 "req_id": 1 
00:09:11.469 } 00:09:11.469 Got JSON-RPC error response 00:09:11.469 response: 00:09:11.469 { 00:09:11.469 "code": -32602, 00:09:11.469 "message": "Invalid cntlid range [0-65519]" 00:09:11.469 }' 00:09:11.469 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:11.469 { 00:09:11.470 "nqn": "nqn.2016-06.io.spdk:cnode3900", 00:09:11.470 "min_cntlid": 0, 00:09:11.470 "method": "nvmf_create_subsystem", 00:09:11.470 "req_id": 1 00:09:11.470 } 00:09:11.470 Got JSON-RPC error response 00:09:11.470 response: 00:09:11.470 { 00:09:11.470 "code": -32602, 00:09:11.470 "message": "Invalid cntlid range [0-65519]" 00:09:11.470 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:11.470 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22728 -i 65520 00:09:11.729 [2024-07-12 19:01:14.176813] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22728: invalid cntlid range [65520-65519] 00:09:11.729 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:11.729 { 00:09:11.729 "nqn": "nqn.2016-06.io.spdk:cnode22728", 00:09:11.729 "min_cntlid": 65520, 00:09:11.729 "method": "nvmf_create_subsystem", 00:09:11.729 "req_id": 1 00:09:11.729 } 00:09:11.729 Got JSON-RPC error response 00:09:11.729 response: 00:09:11.729 { 00:09:11.729 "code": -32602, 00:09:11.729 "message": "Invalid cntlid range [65520-65519]" 00:09:11.729 }' 00:09:11.729 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:11.729 { 00:09:11.729 "nqn": "nqn.2016-06.io.spdk:cnode22728", 00:09:11.729 "min_cntlid": 65520, 00:09:11.729 "method": "nvmf_create_subsystem", 00:09:11.729 "req_id": 1 00:09:11.729 } 00:09:11.729 Got JSON-RPC error response 00:09:11.729 response: 00:09:11.729 { 00:09:11.729 "code": -32602, 00:09:11.729 "message": "Invalid cntlid range [65520-65519]" 00:09:11.729 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:11.729 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18502 -I 0 00:09:11.989 [2024-07-12 19:01:14.369506] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18502: invalid cntlid range [1-0] 00:09:11.989 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:11.989 { 00:09:11.989 "nqn": "nqn.2016-06.io.spdk:cnode18502", 00:09:11.989 "max_cntlid": 0, 00:09:11.989 "method": "nvmf_create_subsystem", 00:09:11.989 "req_id": 1 00:09:11.989 } 00:09:11.989 Got JSON-RPC error response 00:09:11.989 response: 00:09:11.989 { 00:09:11.989 "code": -32602, 00:09:11.989 "message": "Invalid cntlid range [1-0]" 00:09:11.989 }' 00:09:11.989 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:11.989 { 00:09:11.989 "nqn": "nqn.2016-06.io.spdk:cnode18502", 00:09:11.989 "max_cntlid": 0, 00:09:11.989 "method": "nvmf_create_subsystem", 00:09:11.989 "req_id": 1 00:09:11.989 } 00:09:11.989 Got JSON-RPC error response 00:09:11.989 response: 00:09:11.989 { 00:09:11.989 "code": -32602, 00:09:11.989 "message": "Invalid cntlid range [1-0]" 00:09:11.989 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:11.989 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21257 -I 65520 
00:09:12.249 [2024-07-12 19:01:14.558130] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21257: invalid cntlid range [1-65520] 00:09:12.249 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:12.249 { 00:09:12.249 "nqn": "nqn.2016-06.io.spdk:cnode21257", 00:09:12.249 "max_cntlid": 65520, 00:09:12.249 "method": "nvmf_create_subsystem", 00:09:12.249 "req_id": 1 00:09:12.249 } 00:09:12.249 Got JSON-RPC error response 00:09:12.249 response: 00:09:12.249 { 00:09:12.249 "code": -32602, 00:09:12.249 "message": "Invalid cntlid range [1-65520]" 00:09:12.249 }' 00:09:12.249 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:12.249 { 00:09:12.249 "nqn": "nqn.2016-06.io.spdk:cnode21257", 00:09:12.249 "max_cntlid": 65520, 00:09:12.249 "method": "nvmf_create_subsystem", 00:09:12.249 "req_id": 1 00:09:12.249 } 00:09:12.249 Got JSON-RPC error response 00:09:12.249 response: 00:09:12.249 { 00:09:12.249 "code": -32602, 00:09:12.249 "message": "Invalid cntlid range [1-65520]" 00:09:12.249 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:12.249 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25593 -i 6 -I 5 00:09:12.249 [2024-07-12 19:01:14.738748] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25593: invalid cntlid range [6-5] 00:09:12.249 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:12.249 { 00:09:12.249 "nqn": "nqn.2016-06.io.spdk:cnode25593", 00:09:12.249 "min_cntlid": 6, 00:09:12.249 "max_cntlid": 5, 00:09:12.249 "method": "nvmf_create_subsystem", 00:09:12.249 "req_id": 1 00:09:12.249 } 00:09:12.249 Got JSON-RPC error response 00:09:12.249 response: 00:09:12.249 { 00:09:12.249 "code": -32602, 00:09:12.249 "message": "Invalid cntlid range [6-5]" 00:09:12.249 }' 00:09:12.249 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:12.249 { 00:09:12.249 "nqn": "nqn.2016-06.io.spdk:cnode25593", 00:09:12.249 "min_cntlid": 6, 00:09:12.249 "max_cntlid": 5, 00:09:12.249 "method": "nvmf_create_subsystem", 00:09:12.249 "req_id": 1 00:09:12.249 } 00:09:12.249 Got JSON-RPC error response 00:09:12.249 response: 00:09:12.249 { 00:09:12.249 "code": -32602, 00:09:12.249 "message": "Invalid cntlid range [6-5]" 00:09:12.249 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:12.249 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:12.509 { 00:09:12.509 "name": "foobar", 00:09:12.509 "method": "nvmf_delete_target", 00:09:12.509 "req_id": 1 00:09:12.509 } 00:09:12.509 Got JSON-RPC error response 00:09:12.509 response: 00:09:12.509 { 00:09:12.509 "code": -32602, 00:09:12.509 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:12.509 }' 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:12.509 { 00:09:12.509 "name": "foobar", 00:09:12.509 "method": "nvmf_delete_target", 00:09:12.509 "req_id": 1 00:09:12.509 } 00:09:12.509 Got JSON-RPC error response 00:09:12.509 response: 00:09:12.509 { 00:09:12.509 "code": -32602, 00:09:12.509 "message": "The specified target doesn't exist, cannot delete it." 
00:09:12.509 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:12.509 rmmod nvme_tcp 00:09:12.509 rmmod nvme_fabrics 00:09:12.509 rmmod nvme_keyring 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 183957 ']' 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 183957 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 183957 ']' 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 183957 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.509 19:01:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 183957 00:09:12.509 19:01:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:12.509 19:01:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:12.509 19:01:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 183957' 00:09:12.509 killing process with pid 183957 00:09:12.509 19:01:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 183957 00:09:12.509 19:01:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 183957 00:09:12.769 19:01:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:12.769 19:01:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:12.769 19:01:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:12.769 19:01:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:12.769 19:01:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:12.769 19:01:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.769 19:01:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.769 19:01:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.310 19:01:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:15.310 00:09:15.310 real 0m12.152s 00:09:15.310 user 0m19.733s 00:09:15.310 sys 0m5.333s 00:09:15.310 19:01:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.310 19:01:17 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.310 ************************************ 00:09:15.310 END TEST nvmf_invalid 00:09:15.310 ************************************ 00:09:15.310 19:01:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:15.310 19:01:17 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:15.310 19:01:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:15.310 19:01:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.310 19:01:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:15.310 ************************************ 00:09:15.310 START TEST nvmf_abort 00:09:15.310 ************************************ 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:15.310 * Looking for test storage... 00:09:15.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:15.310 19:01:17 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:15.311 19:01:17 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:15.311 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:15.311 19:01:17 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.311 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:15.311 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:15.311 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:15.311 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.311 19:01:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.311 19:01:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.311 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:15.311 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:15.311 19:01:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:15.311 19:01:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.589 
19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:20.589 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:20.589 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:20.589 Found net devices under 0000:86:00.0: cvl_0_0 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:20.589 Found net devices under 0000:86:00.1: cvl_0_1 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.589 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.590 19:01:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.590 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.590 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.590 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:20.590 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.590 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:20.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:20.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:09:20.849 00:09:20.849 --- 10.0.0.2 ping statistics --- 00:09:20.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.849 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:20.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:09:20.849 00:09:20.849 --- 10.0.0.1 ping statistics --- 00:09:20.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.849 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=188184 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 188184 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 188184 ']' 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.849 19:01:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.849 [2024-07-12 19:01:23.270811] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
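The nvmf_tcp_init sequence traced above is worth having in one place. A condensed, standalone sketch of the same plumbing (assuming the same cvl_0_* interface names and a root shell; this is a reconstruction from the trace, not the verbatim common.sh body):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator port stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                           # root namespace to namespaced target, as above

Splitting the two ports of one dual-port NIC across a network namespace boundary is what lets a single host act as both initiator (10.0.0.1) and target (10.0.0.2) over real E810 hardware.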
00:09:20.849 [2024-07-12 19:01:23.270852] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.849 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.849 [2024-07-12 19:01:23.340557] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.849 [2024-07-12 19:01:23.414152] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.849 [2024-07-12 19:01:23.414193] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.849 [2024-07-12 19:01:23.414200] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.849 [2024-07-12 19:01:23.414207] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.849 [2024-07-12 19:01:23.414212] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.849 [2024-07-12 19:01:23.414345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.849 [2024-07-12 19:01:23.414452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.849 [2024-07-12 19:01:23.414451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:21.789 [2024-07-12 19:01:24.115503] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:21.789 Malloc0 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:21.789 Delay0 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
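The rpc_cmd calls above build the whole target-side stack for the abort test. Roughly the same thing via scripts/rpc.py directly (rpc_cmd is the harness wrapper around it; the flag glosses are my reading of the RPC options, not harness output):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256        # TCP transport, matching NVMF_TRANSPORT_OPTS above
    $rpc bdev_malloc_create 64 4096 -b Malloc0                 # 64 MiB RAM disk, 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000           # ~1 s average/tail latency on reads and writes
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host

The delay bdev is the interesting part: with every I/O held for about a second, the abort exerciser that runs next always has in-flight commands to cancel.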
00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:21.789 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.790 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:21.790 [2024-07-12 19:01:24.179465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.790 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.790 19:01:24 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:21.790 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.790 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:21.790 19:01:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.790 19:01:24 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:21.790 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.790 [2024-07-12 19:01:24.299902] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:24.324 Initializing NVMe Controllers 00:09:24.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:24.324 controller IO queue size 128 less than required 00:09:24.324 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:24.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:24.325 Initialization complete. Launching workers. 
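That banner comes from the abort example binary, which the harness pointed at the listener it just created. To replay it by hand from the same tree (my flag reading: -q 128 is the queue depth, -t 1 the run time in seconds, -c 0x1 the core mask):

    build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The "IO queue size 128 less than required" warning is expected here: with the queue depth matching the controller's advertised queue size, submissions back up in the driver, and together with the 1 s Delay0 latency that guarantees a steady supply of abortable commands. The NS/CTRLR counters that follow summarize how many I/Os completed versus how many aborts were submitted and succeeded.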
00:09:24.325 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 44866 00:09:24.325 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 44927, failed to submit 62 00:09:24.325 success 44870, unsuccess 57, failed 0 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:24.325 rmmod nvme_tcp 00:09:24.325 rmmod nvme_fabrics 00:09:24.325 rmmod nvme_keyring 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 188184 ']' 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 188184 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 188184 ']' 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 188184 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 188184 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 188184' 00:09:24.325 killing process with pid 188184 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 188184 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 188184 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.325 19:01:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.863 19:01:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:26.863 00:09:26.863 real 0m11.515s 00:09:26.863 user 0m13.648s 00:09:26.863 sys 0m5.101s 00:09:26.863 19:01:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.863 19:01:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:26.863 ************************************ 00:09:26.863 END TEST nvmf_abort 00:09:26.863 ************************************ 00:09:26.863 19:01:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:26.863 19:01:28 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:26.863 19:01:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:26.863 19:01:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.863 19:01:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:26.863 ************************************ 00:09:26.863 START TEST nvmf_ns_hotplug_stress 00:09:26.863 ************************************ 00:09:26.863 19:01:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:26.863 * Looking for test storage... 00:09:26.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.863 19:01:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.863 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:26.864 19:01:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:26.864 19:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.144 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.144 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:32.144 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:32.144 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:32.144 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:32.144 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:32.144 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:32.144 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:32.144 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:32.144 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:32.145 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:32.145 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.145 19:01:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:32.145 Found net devices under 0000:86:00.0: cvl_0_0 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:32.145 Found net devices under 0000:86:00.1: cvl_0_1 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.145 19:01:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.145 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:32.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:09:32.405 00:09:32.405 --- 10.0.0.2 ping statistics --- 00:09:32.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.405 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:32.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:09:32.405 00:09:32.405 --- 10.0.0.1 ping statistics --- 00:09:32.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.405 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=192341 00:09:32.405 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 192341 00:09:32.406 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:32.406 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 192341 ']' 00:09:32.406 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.406 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.406 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.406 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.406 19:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.406 [2024-07-12 19:01:34.821982] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
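nvmfappstart launches the target inside the namespace, which is why every later RPC in this test runs under ip netns exec. The launch traced above reduces to (paths as used by this job):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # -m 0xE pins reactors to cores 1-3, matching the three "Reactor started" notices below;
    # -e 0xFFFF enables all tracepoint groups; -i 0 fixes the shm id so
    # 'spdk_trace -s nvmf -i 0' can attach, exactly as the startup notices advertise.

waitforlisten then polls the RPC socket at /var/tmp/spdk.sock until the target is ready before the harness issues any RPCs.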
00:09:32.406 [2024-07-12 19:01:34.822026] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.406 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.406 [2024-07-12 19:01:34.894632] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:32.666 [2024-07-12 19:01:34.974156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.666 [2024-07-12 19:01:34.974189] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.666 [2024-07-12 19:01:34.974196] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.666 [2024-07-12 19:01:34.974202] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.666 [2024-07-12 19:01:34.974207] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.666 [2024-07-12 19:01:34.974323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.666 [2024-07-12 19:01:34.974429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.666 [2024-07-12 19:01:34.974430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.236 19:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:33.236 19:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:33.236 19:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:33.236 19:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:33.236 19:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:33.236 19:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.236 19:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:33.236 19:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:33.496 [2024-07-12 19:01:35.831732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.496 19:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:33.496 19:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.756 [2024-07-12 19:01:36.213106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.756 19:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:34.016 19:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:09:34.276 Malloc0 00:09:34.276 19:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:34.276 Delay0 00:09:34.276 19:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.535 19:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:34.796 NULL1 00:09:34.796 19:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:34.796 19:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=192707 00:09:34.796 19:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:34.796 19:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:34.796 19:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.055 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.055 19:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.315 19:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:35.315 19:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:35.575 true 00:09:35.575 19:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:35.575 19:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.575 19:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.835 19:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:35.835 19:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:36.095 true 00:09:36.095 19:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:36.095 19:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.095 19:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.354 19:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:36.354 19:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:36.614 true 00:09:36.614 19:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:36.614 19:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.874 19:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.874 19:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:36.874 19:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:37.134 true 00:09:37.134 19:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:37.134 19:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.393 19:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.653 19:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:37.653 19:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:37.653 true 00:09:37.653 19:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:37.653 19:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.912 19:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.171 19:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:38.171 19:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:38.431 true 00:09:38.431 19:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:38.431 19:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.431 19:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.691 19:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
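The remove/add/resize pattern repeating through this stretch is the stress loop itself. A paraphrase of what ns_hotplug_stress.sh is evidently doing (the bare "true" lines are the JSON-RPC replies from bdev_null_resize; PERF_PID is the spdk_nvme_perf started above with -t 30 -q 128 -w randread):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                          # run as long as perf is alive
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove nsid 1 (Delay0)...
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # ...and hot-add it back
        $rpc bdev_null_resize NULL1 $((++null_size))                   # grow NULL1: 1001, 1002, ...
    done

So while the initiator keeps queue-depth-128 random reads in flight for 30 seconds, one namespace is continuously yanked and re-attached and another is continuously resized; the test passes if neither the target nor the initiator falls over.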
00:09:38.691 19:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:38.949 true 00:09:38.949 19:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:38.949 19:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.208 19:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.208 19:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:39.208 19:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:39.467 true 00:09:39.467 19:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:39.468 19:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.726 19:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.986 19:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:39.986 19:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:39.986 true 00:09:39.986 19:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:39.986 19:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.244 19:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.503 19:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:40.503 19:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:40.503 true 00:09:40.763 19:01:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:40.763 19:01:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.763 19:01:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.023 19:01:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:41.023 19:01:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1011 00:09:41.282 true 00:09:41.282 19:01:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:41.282 19:01:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.542 19:01:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.542 19:01:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:41.542 19:01:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:41.810 true 00:09:41.810 19:01:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:41.810 19:01:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.080 19:01:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.080 19:01:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:42.080 19:01:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:42.339 true 00:09:42.339 19:01:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:42.339 19:01:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.598 19:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.857 19:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:42.857 19:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:42.857 true 00:09:42.857 19:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:42.857 19:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.116 19:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.375 19:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:43.375 19:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:43.375 true 00:09:43.635 19:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:43.635 
19:01:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.635 19:01:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.894 19:01:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:43.894 19:01:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:44.153 true 00:09:44.153 19:01:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:44.153 19:01:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.411 19:01:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.411 19:01:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:44.412 19:01:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:44.670 true 00:09:44.670 19:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:44.670 19:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.930 19:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.930 19:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:44.930 19:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:45.189 true 00:09:45.189 19:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:45.189 19:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.448 19:01:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.707 19:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:45.707 19:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:45.707 true 00:09:45.707 19:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:45.707 19:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:45.965 19:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.224 19:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:46.224 19:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:46.483 true 00:09:46.483 19:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:46.483 19:01:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.483 19:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.742 19:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:46.742 19:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:47.000 true 00:09:47.001 19:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:47.001 19:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.260 19:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.260 19:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:47.260 19:01:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:47.520 true 00:09:47.520 19:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:47.520 19:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.780 19:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.040 19:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:48.040 19:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:48.040 true 00:09:48.299 19:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:48.299 19:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.299 19:01:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.558 19:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:48.558 19:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:48.818 true 00:09:48.818 19:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:48.818 19:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.077 19:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.077 19:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:49.077 19:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:49.336 true 00:09:49.336 19:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:49.336 19:01:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.596 19:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.855 19:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:49.855 19:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:49.855 true 00:09:49.855 19:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:49.855 19:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.114 19:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.373 19:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:50.373 19:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:50.631 true 00:09:50.631 19:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:50.631 19:01:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.631 19:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.889 19:01:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:50.890 19:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:51.149 true 00:09:51.149 19:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:51.149 19:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.408 19:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.408 19:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:51.408 19:01:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:51.667 true 00:09:51.667 19:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:51.667 19:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.926 19:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.186 19:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:52.186 19:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:52.186 true 00:09:52.186 19:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:52.186 19:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.445 19:01:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.704 19:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:52.705 19:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:52.964 true 00:09:52.965 19:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:52.965 19:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.224 19:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.224 19:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:53.224 19:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:53.483 true 00:09:53.483 19:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:53.483 19:01:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.743 19:01:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.002 19:01:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:54.002 19:01:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:54.002 true 00:09:54.002 19:01:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:54.002 19:01:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.262 19:01:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.521 19:01:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:09:54.521 19:01:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:09:54.781 true 00:09:54.781 19:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:54.781 19:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.781 19:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.040 19:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:09:55.040 19:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:09:55.299 true 00:09:55.299 19:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:55.299 19:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.558 19:01:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.558 19:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:09:55.558 19:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:09:55.817 true 00:09:55.817 19:01:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:55.817 19:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.076 19:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.335 19:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:09:56.335 19:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:09:56.335 true 00:09:56.335 19:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:56.335 19:01:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.593 19:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.852 19:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:09:56.852 19:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:09:57.110 true 00:09:57.111 19:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:57.111 19:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.111 19:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.369 19:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:09:57.369 19:01:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:09:57.628 true 00:09:57.628 19:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:57.628 19:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.887 19:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.887 19:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:09:57.887 19:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:09:58.146 true 00:09:58.146 19:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:58.146 19:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.406 19:02:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.665 19:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:09:58.666 19:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:09:58.666 true 00:09:58.925 19:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:58.925 19:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.925 19:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.184 19:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:09:59.184 19:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:09:59.444 true 00:09:59.444 19:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:59.444 19:02:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.702 19:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.702 19:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:09:59.702 19:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:09:59.961 true 00:09:59.961 19:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:09:59.961 19:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.221 19:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.479 19:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:00.479 19:02:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:00.479 true 00:10:00.479 19:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:10:00.479 19:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.738 
19:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.996 19:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:00.996 19:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:01.255 true 00:10:01.255 19:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:10:01.255 19:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.255 19:02:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.514 19:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:01.514 19:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:01.773 true 00:10:01.773 19:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:10:01.773 19:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.033 19:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.033 19:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:02.033 19:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:02.293 true 00:10:02.293 19:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:10:02.293 19:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.553 19:02:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.813 19:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:02.813 19:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:02.813 true 00:10:03.073 19:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:10:03.073 19:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.073 19:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.332 19:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:03.332 19:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:03.592 true 00:10:03.592 19:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:10:03.592 19:02:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.852 19:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.852 19:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:03.852 19:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:04.112 true 00:10:04.112 19:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:10:04.112 19:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.372 19:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.648 19:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:04.648 19:02:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:04.648 true 00:10:04.648 19:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707 00:10:04.648 19:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.907 19:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.167 19:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:05.167 19:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:05.167 Initializing NVMe Controllers 00:10:05.167 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:05.167 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:10:05.167 Controller IO queue size 128, less than required. 00:10:05.167 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:05.167 WARNING: Some requested NVMe devices were skipped 00:10:05.167 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:05.167 Initialization complete. Launching workers. 
00:10:05.167 ========================================================
00:10:05.167                                                     Latency(us)
00:10:05.167 Device Information                                                       :     IOPS    MiB/s   Average      min      max
00:10:05.167 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27003.08    13.19   4740.19  1652.37 42761.18
00:10:05.167 ========================================================
00:10:05.167 Total                                                                    : 27003.08    13.19   4740.19  1652.37 42761.18
00:10:05.167
00:10:05.167 true
00:10:05.427 19:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 192707
00:10:05.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (192707) - No such process
00:10:05.427 19:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 192707
00:10:05.427 19:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:05.427 19:02:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:05.687 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:05.687 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:05.687 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:05.687 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:05.687 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:05.947 null0
00:10:05.947 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:05.947 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:05.947 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:05.947 null1
00:10:05.947 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:05.947 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:05.947 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:10:06.207 null2
00:10:06.207 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:06.207 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:06.207 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:10:06.467 null3
00:10:06.467 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:06.467 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:06.467 19:02:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:10:06.467 null4
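The summary block above is internally consistent: 27003.08 IOPS at 13.19 MiB/s works out to 13.19 * 1048576 / 27003.08 ≈ 512 bytes per request, so the generator was apparently driving 512-byte IOs against namespace 2 while namespace 1 was hot-removed/re-added and the NULL1 bdev grew underneath it. The @44-@50 xtrace records above all come from one loop in ns_hotplug_stress.sh; a minimal sketch of that cycle, reconstructed from the trace ($rpc, $nqn, $perf_pid and the 2>/dev/null are illustrative assumptions, not the script verbatim):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  null_size=1000
  while kill -0 "$perf_pid" 2>/dev/null; do        # sh@44: cycle while the I/O generator (PID 192707 above) is alive
      "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # sh@45: hot-remove namespace 1
      "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0   # sh@46: re-attach it, backed by the Delay0 bdev
      null_size=$((null_size + 1))                 # sh@49
      "$rpc" bdev_null_resize NULL1 "$null_size"   # sh@50: grow NULL1 (new size in MB) under live I/O
  done

Once kill -0 fails with "No such process" the generator has exited, the loop ends, and the script removes both namespaces (sh@54/@55) before the parallel phase; the bdev_null_create records above and below stage null0 through null7 for it.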
00:10:06.467 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:06.467 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:06.467 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:06.726 null5 00:10:06.726 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:06.726 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:06.727 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:06.986 null6 00:10:06.986 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:06.986 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:06.986 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:06.986 null7 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
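With the eight null bdevs staged, the @62-@64 records here and just below fan out eight background add_remove workers, one per namespace-ID/bdev pair (add_remove 1 null0 through add_remove 8 null7), collecting their PIDs for the @66 wait. A sketch of that fan-out, reconstructed from the trace ($rpc as in the previous sketch; the add_remove helper itself is sketched after the spawn records below):

  nthreads=8                                       # sh@58
  pids=()
  for ((i = 0; i < nthreads; i++)); do             # sh@59
      "$rpc" bdev_null_create "null$i" 100 4096    # sh@60: 100 MB null bdev with 4096-byte blocks
  done
  for ((i = 0; i < nthreads; i++)); do             # sh@62
      add_remove $((i + 1)) "null$i" &             # sh@63: namespace ID i+1 backed by null$i, run in the background
      pids+=($!)                                   # sh@64: remember each worker's PID
  done
  wait "${pids[@]}"                                # sh@66: the PIDs 198274 198275 ... traced below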
00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.247 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
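Every worker executes the add_remove helper traced at @14-@18, and since all eight share the shell's xtrace stream, their records interleave from here on. A sketch of the helper, reconstructed from the trace (the ten-iteration bound comes from the (( i < 10 )) guard; the RPC invocations are as traced at sh@17/@18):

  add_remove() {
      local nsid=$1 bdev=$2                        # sh@14: e.g. add_remove 8 null7 -> nsid=8 bdev=null7
      for ((i = 0; i < 10; i++)); do               # sh@16: ten add/remove rounds per worker
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
      done
  }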
00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 198274 198275 198277 198280 198281 198283 198285 198287 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:07.248 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.508 19:02:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:07.768 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:08.029 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:08.289 19:02:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.289 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:08.549 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.549 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:08.549 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:08.549 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:08.549 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:08.549 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:08.549 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:08.549 19:02:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.549 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:08.809 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.809 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:08.809 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:08.809 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:08.809 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:08.809 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:08.809 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:08.809 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:09.069 19:02:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.069 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.330 19:02:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.590 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.590 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.590 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.590 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.590 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:09.590 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.590 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.590 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.851 19:02:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.851 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.111 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:10.372 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:10.372 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:10.372 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.372 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:10.372 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:10.372 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:10.372 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.372 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:10.631 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.631 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.631 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:10.631 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.631 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.631 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:10.631 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.631 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.631 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:10.632 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.632 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.632 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:10.632 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.632 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.632 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:10:10.632 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.632 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.632 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:10.632 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.632 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.632 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:10.632 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.632 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.632 19:02:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:10.632 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:10.632 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.632 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:10.632 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:10.632 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:10.632 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:10.632 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:10.632 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.891 19:02:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:10.891 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:10.892 rmmod nvme_tcp 00:10:10.892 rmmod nvme_fabrics 00:10:10.892 rmmod nvme_keyring 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 192341 ']' 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 192341 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 192341 ']' 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 192341 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:10.892 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 192341 00:10:11.151 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:11.151 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:11.151 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 192341' 00:10:11.151 killing process with pid 192341 00:10:11.151 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- 
# kill 192341
00:10:11.151 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 192341
00:10:11.151 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:11.151 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:11.151 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:11.151 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:11.151 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:11.151 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:11.151 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:11.151 19:02:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:13.689 19:02:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:13.689
00:10:13.689 real 0m46.832s
00:10:13.689 user 3m18.181s
00:10:13.689 sys 0m16.580s
00:10:13.689 19:02:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:13.689 19:02:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:13.689 ************************************
00:10:13.689 END TEST nvmf_ns_hotplug_stress
00:10:13.689 ************************************
00:10:13.689 19:02:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:10:13.689 19:02:15 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:13.689 19:02:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:10:13.689 19:02:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:13.689 19:02:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:13.689 ************************************
00:10:13.689 START TEST nvmf_connect_stress
00:10:13.689 ************************************
00:10:13.689 19:02:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:13.689 * Looking for test storage...
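The END TEST banner above closes nvmf_ns_hotplug_stress: run_test (nvmf/nvmf.sh@33) brackets every test with the START TEST/END TEST banners and the real/user/sys timing block, so the 0m46.832s of wall time and roughly 3m18s of user CPU reported here all went into the namespace churn traced above. That churn is driven by a tiny loop at lines 16-18 of target/ns_hotplug_stress.sh: ten passes that attach namespaces 1-8 of nqn.2016-06.io.spdk:cnode1, each backed by a null bdev (nsid N maps to null(N-1) in every add_ns call in the trace), and then detach them all again. A minimal sketch of that loop, reconstructed from the xtrace output; the while/shuf structure is an assumption inferred from the interleaved nsid ordering in the trace, not the verbatim script body:

#!/usr/bin/env bash
# Sketch of the loop behind the ns_hotplug_stress.sh@16-18 trace lines above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

i=0
while ((i < 10)); do                  # the '(( i < 10 ))' checks at line 16
  for n in $(shuf -i 1-8); do         # line 17: hot-add nsid n, backed by null bdev null(n-1)
    $rpc_py nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
  done
  for n in $(shuf -i 1-8); do         # line 18: hot-remove the same namespaces
    $rpc_py nvmf_subsystem_remove_ns "$nqn" "$n"
  done
  ((++i))                             # the '(( ++i ))' steps at line 16
done

As the test name suggests, the point is to exercise namespace hot-add/hot-remove against a live subsystem, which is why each pass re-adds and re-removes all eight namespaces rather than leaving the subsystem in a steady state.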
00:10:13.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.689 19:02:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:13.690 19:02:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:18.968 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:18.968 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:18.968 Found net devices under 0000:86:00.0: cvl_0_0 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:18.968 19:02:21 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:18.968 Found net devices under 0000:86:00.1: cvl_0_1 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:18.968 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:19.228 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:19.228 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:19.228 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:19.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:19.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:10:19.228 00:10:19.228 --- 10.0.0.2 ping statistics --- 00:10:19.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.228 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:10:19.228 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:19.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:19.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:10:19.228 00:10:19.228 --- 10.0.0.1 ping statistics --- 00:10:19.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.228 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:10:19.228 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.228 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=202495 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 202495 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 202495 ']' 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:19.229 19:02:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.229 [2024-07-12 19:02:21.707379] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
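
An aside for readers reconstructing the topology: the nvmf_tcp_init sequence traced above splits the two E810 ports between the host and a private network namespace, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0, inside cvl_0_0_ns_spdk) exchange NVMe/TCP over real hardware on a single machine. A minimal sketch of that sequence, condensed from the nvmf/common.sh@248-268 steps above, with every command and name taken from the trace:

# Target-side namespace setup, condensed from the trace (address flushes omitted).
ip netns add cvl_0_0_ns_spdk                  # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # first port goes to the target side
ip addr add 10.0.0.1/24 dev cvl_0_1           # second port stays with the initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 10.0.0.2                            # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # and back
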
00:10:19.229 [2024-07-12 19:02:21.707427] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.229 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.229 [2024-07-12 19:02:21.780070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:19.488 [2024-07-12 19:02:21.858575] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.488 [2024-07-12 19:02:21.858609] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.488 [2024-07-12 19:02:21.858616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.488 [2024-07-12 19:02:21.858622] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.488 [2024-07-12 19:02:21.858627] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.488 [2024-07-12 19:02:21.858759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.488 [2024-07-12 19:02:21.858863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.488 [2024-07-12 19:02:21.858864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.058 [2024-07-12 19:02:22.563514] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.058 [2024-07-12 19:02:22.594331] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.058 NULL1 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=202679 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.058 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress 
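
Before the loop output continues, note the shape of the setup it exercises: four RPCs stand up the target, then connect_stress reconnects against it for 10 seconds while the seq 1 20 / cat loop batches twenty RPC snippets into rpc.txt for later replay (the trace shows the file being built but not its contents). A condensed sketch with parameters copied from connect_stress.sh@15-20 above; invoking rpc.py directly is an assumption for readability, since the suite routes these calls through its rpc_cmd helper:

# Target bring-up as direct rpc.py calls (invocation style assumed; flags verbatim).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512-byte blocks
# The stress client then hammers that subsystem for 10 seconds:
test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
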
-- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.318 19:02:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.576 19:02:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.576 19:02:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:20.576 19:02:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.576 19:02:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.576 19:02:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.834 19:02:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.834 19:02:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:20.834 19:02:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.834 19:02:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.834 19:02:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.404 19:02:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.404 19:02:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:21.404 
19:02:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.404 19:02:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.404 19:02:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.664 19:02:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.664 19:02:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:21.664 19:02:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.664 19:02:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.664 19:02:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.924 19:02:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.924 19:02:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:21.924 19:02:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.924 19:02:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.924 19:02:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.184 19:02:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.184 19:02:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:22.184 19:02:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.184 19:02:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.184 19:02:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.443 19:02:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.443 19:02:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:22.443 19:02:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.443 19:02:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.443 19:02:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.012 19:02:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.012 19:02:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:23.012 19:02:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.012 19:02:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.012 19:02:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.272 19:02:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.272 19:02:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:23.272 19:02:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.272 19:02:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.272 19:02:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.531 19:02:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.531 19:02:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:23.531 19:02:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:10:23.532 19:02:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.532 19:02:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.791 19:02:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.791 19:02:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:23.791 19:02:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.791 19:02:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.791 19:02:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.053 19:02:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.053 19:02:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:24.053 19:02:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.053 19:02:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.053 19:02:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.620 19:02:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.620 19:02:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:24.620 19:02:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.620 19:02:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.620 19:02:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.879 19:02:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.879 19:02:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:24.879 19:02:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.879 19:02:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.879 19:02:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.137 19:02:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.137 19:02:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:25.137 19:02:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.137 19:02:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.137 19:02:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.396 19:02:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.396 19:02:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:25.396 19:02:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.396 19:02:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.396 19:02:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.655 19:02:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.655 19:02:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:25.655 19:02:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.655 19:02:28 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.655 19:02:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.224 19:02:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.224 19:02:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:26.224 19:02:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.224 19:02:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.224 19:02:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.483 19:02:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.483 19:02:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:26.483 19:02:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.483 19:02:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.483 19:02:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.742 19:02:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.742 19:02:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:26.742 19:02:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.742 19:02:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.742 19:02:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.000 19:02:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.000 19:02:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:27.000 19:02:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.000 19:02:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.000 19:02:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.569 19:02:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.569 19:02:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:27.569 19:02:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.569 19:02:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.569 19:02:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.829 19:02:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.829 19:02:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:27.829 19:02:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.829 19:02:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.829 19:02:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.088 19:02:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.088 19:02:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:28.088 19:02:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.088 19:02:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.088 
19:02:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.348 19:02:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.348 19:02:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:28.348 19:02:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.348 19:02:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.348 19:02:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.607 19:02:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.607 19:02:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:28.607 19:02:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.607 19:02:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.607 19:02:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.176 19:02:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.176 19:02:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:29.176 19:02:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.176 19:02:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.176 19:02:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.436 19:02:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.436 19:02:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:29.436 19:02:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.436 19:02:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.436 19:02:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.696 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.696 19:02:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:29.696 19:02:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.696 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.696 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.956 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.956 19:02:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:29.956 19:02:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.956 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.956 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.216 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 202679 00:10:30.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (202679) - No such process 00:10:30.477 19:02:32 
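
The kill: (202679) - No such process message just above is the expected exit path, not a failure: kill -0 delivers no signal and only reports whether the PID still exists, so the repeating kill -0 / rpc_cmd pairs in the trace form a liveness-gated replay loop. A minimal sketch of the pattern, using the variable names the script sets earlier (a sketch only; the redirections and the batched contents of $rpcs are not visible in the xtrace output):

# Shape of the loop traced at connect_stress.sh@34-38.
while kill -0 "$PERF_PID"; do    # true while connect_stress is still running
    rpc_cmd < "$rpcs"            # replay the batched RPCs against the live target
done
wait "$PERF_PID"                 # the @38 wait above; reaps the finished client
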
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 202679 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:30.477 rmmod nvme_tcp 00:10:30.477 rmmod nvme_fabrics 00:10:30.477 rmmod nvme_keyring 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 202495 ']' 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 202495 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 202495 ']' 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 202495 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 202495 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 202495' 00:10:30.477 killing process with pid 202495 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 202495 00:10:30.477 19:02:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 202495 00:10:30.737 19:02:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:30.738 19:02:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:30.738 19:02:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:30.738 19:02:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:30.738 19:02:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:30.738 19:02:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.738 19:02:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.738 19:02:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:10:32.653 19:02:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:32.653 00:10:32.653 real 0m19.328s 00:10:32.653 user 0m43.240s 00:10:32.653 sys 0m6.419s 00:10:32.653 19:02:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:32.653 19:02:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.653 ************************************ 00:10:32.653 END TEST nvmf_connect_stress 00:10:32.653 ************************************ 00:10:32.653 19:02:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:32.653 19:02:35 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:32.653 19:02:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:32.653 19:02:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.653 19:02:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:32.653 ************************************ 00:10:32.653 START TEST nvmf_fused_ordering 00:10:32.653 ************************************ 00:10:32.653 19:02:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:32.913 * Looking for test storage... 00:10:32.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.913 19:02:35 
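
Between suites, fused_ordering.sh re-sources nvmf/common.sh, and the @17/@18 steps above derive the initiator identity from nvme-cli. A short sketch of what those two assignments amount to; the suffix extraction is an assumption, inferred from the trace where the host ID equals the UUID component of the generated NQN:

# Initiator identity, per nvmf/common.sh@17-19 in the trace.
NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}     # reuse the trailing UUID as the host ID (assumed)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
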
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.913 19:02:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:32.914 19:02:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:38.200 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.200 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:38.200 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:38.200 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:38.200 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:38.200 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:38.200 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:38.200 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:38.200 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:38.201 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:38.201 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:38.201 Found net devices under 0000:86:00.0: cvl_0_0 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.201 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:38.462 Found net devices under 0000:86:00.1: cvl_0_1 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:38.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:38.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:10:38.462 00:10:38.462 --- 10.0.0.2 ping statistics --- 00:10:38.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.462 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:38.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:10:38.462 00:10:38.462 --- 10.0.0.1 ping statistics --- 00:10:38.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.462 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:38.462 19:02:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:38.462 19:02:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:38.462 19:02:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:38.462 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:38.462 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:38.722 19:02:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=207824 00:10:38.722 19:02:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 207824 00:10:38.722 19:02:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:38.722 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 207824 ']' 00:10:38.722 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.722 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:38.722 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.722 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:38.722 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:38.722 [2024-07-12 19:02:41.084418] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:10:38.723 [2024-07-12 19:02:41.084463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.723 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.723 [2024-07-12 19:02:41.154083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.723 [2024-07-12 19:02:41.232698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.723 [2024-07-12 19:02:41.232735] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.723 [2024-07-12 19:02:41.232742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.723 [2024-07-12 19:02:41.232749] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.723 [2024-07-12 19:02:41.232754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
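
The app_setup_trace notices that close the block above are actionable if this phase ever hangs: the target was started with -i 0 and -e 0xFFFF, so its tracepoint ring can be snapshotted live or copied out of shared memory for offline decoding. A minimal sketch following the log's own suggestions (the offline -f replay flag is an assumption about the spdk_trace tool):

# Live snapshot of the nvmf target's tracepoints (instance id 0, matching -i 0 above).
build/bin/spdk_trace -s nvmf -i 0
# Or preserve the shared-memory ring for later analysis, as the notice suggests:
cp /dev/shm/nvmf_trace.0 /tmp/ && build/bin/spdk_trace -f /tmp/nvmf_trace.0
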
00:10:38.723 [2024-07-12 19:02:41.232772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:39.660 [2024-07-12 19:02:41.927687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:39.660 [2024-07-12 19:02:41.947838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:39.660 NULL1 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.660 19:02:41 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.660 19:02:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:39.660 [2024-07-12 19:02:42.001457] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:10:39.660 [2024-07-12 19:02:42.001493] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid208069 ] 00:10:39.660 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.919 Attached to nqn.2016-06.io.spdk:cnode1 00:10:39.919 Namespace ID: 1 size: 1GB 00:10:39.919 fused_ordering(0) 00:10:39.919 fused_ordering(1) 00:10:39.919 fused_ordering(2) 00:10:39.919 fused_ordering(3) 00:10:39.919 fused_ordering(4) 00:10:39.919 fused_ordering(5) 00:10:39.919 fused_ordering(6) 00:10:39.919 fused_ordering(7) 00:10:39.919 fused_ordering(8) 00:10:39.919 fused_ordering(9) 00:10:39.919 fused_ordering(10) 00:10:39.919 fused_ordering(11) 00:10:39.919 fused_ordering(12) 00:10:39.919 fused_ordering(13) 00:10:39.919 fused_ordering(14) 00:10:39.919 fused_ordering(15) 00:10:39.919 fused_ordering(16) 00:10:39.919 fused_ordering(17) 00:10:39.919 fused_ordering(18) 00:10:39.919 fused_ordering(19) 00:10:39.919 fused_ordering(20) 00:10:39.919 fused_ordering(21) 00:10:39.919 fused_ordering(22) 00:10:39.919 fused_ordering(23) 00:10:39.919 fused_ordering(24) 00:10:39.919 fused_ordering(25) 00:10:39.919 fused_ordering(26) 00:10:39.919 fused_ordering(27) 00:10:39.919 fused_ordering(28) 00:10:39.919 fused_ordering(29) 00:10:39.919 fused_ordering(30) 00:10:39.919 fused_ordering(31) 00:10:39.919 fused_ordering(32) 00:10:39.919 fused_ordering(33) 00:10:39.919 fused_ordering(34) 00:10:39.919 fused_ordering(35) 00:10:39.919 fused_ordering(36) 00:10:39.919 fused_ordering(37) 00:10:39.919 fused_ordering(38) 00:10:39.919 fused_ordering(39) 00:10:39.919 fused_ordering(40) 00:10:39.919 fused_ordering(41) 00:10:39.919 fused_ordering(42) 00:10:39.919 fused_ordering(43) 00:10:39.919 fused_ordering(44) 00:10:39.919 fused_ordering(45) 00:10:39.919 fused_ordering(46) 00:10:39.919 fused_ordering(47) 00:10:39.919 fused_ordering(48) 00:10:39.919 fused_ordering(49) 00:10:39.919 fused_ordering(50) 00:10:39.919 fused_ordering(51) 00:10:39.919 fused_ordering(52) 00:10:39.919 fused_ordering(53) 00:10:39.919 fused_ordering(54) 00:10:39.919 fused_ordering(55) 00:10:39.919 fused_ordering(56) 00:10:39.919 fused_ordering(57) 00:10:39.919 fused_ordering(58) 00:10:39.919 fused_ordering(59) 00:10:39.919 fused_ordering(60) 00:10:39.919 fused_ordering(61) 00:10:39.919 fused_ordering(62) 00:10:39.919 fused_ordering(63) 00:10:39.919 fused_ordering(64) 00:10:39.919 fused_ordering(65) 00:10:39.920 fused_ordering(66) 00:10:39.920 fused_ordering(67) 00:10:39.920 fused_ordering(68) 00:10:39.920 fused_ordering(69) 00:10:39.920 fused_ordering(70) 00:10:39.920 fused_ordering(71) 00:10:39.920 fused_ordering(72) 00:10:39.920 fused_ordering(73) 00:10:39.920 fused_ordering(74) 00:10:39.920 fused_ordering(75) 00:10:39.920 fused_ordering(76) 00:10:39.920 fused_ordering(77) 00:10:39.920 fused_ordering(78) 00:10:39.920 
fused_ordering(79) 00:10:39.920 (fused_ordering 80 through 938 each reported in sequence, one line per operation, timestamps advancing from 00:10:39.920 to 00:10:41.269) fused_ordering(939)
00:10:41.269 fused_ordering(940) 00:10:41.269 fused_ordering(941) 00:10:41.269 fused_ordering(942) 00:10:41.269 fused_ordering(943) 00:10:41.269 fused_ordering(944) 00:10:41.269 fused_ordering(945) 00:10:41.269 fused_ordering(946) 00:10:41.269 fused_ordering(947) 00:10:41.269 fused_ordering(948) 00:10:41.269 fused_ordering(949) 00:10:41.269 fused_ordering(950) 00:10:41.269 fused_ordering(951) 00:10:41.269 fused_ordering(952) 00:10:41.269 fused_ordering(953) 00:10:41.269 fused_ordering(954) 00:10:41.269 fused_ordering(955) 00:10:41.269 fused_ordering(956) 00:10:41.269 fused_ordering(957) 00:10:41.269 fused_ordering(958) 00:10:41.269 fused_ordering(959) 00:10:41.269 fused_ordering(960) 00:10:41.269 fused_ordering(961) 00:10:41.269 fused_ordering(962) 00:10:41.269 fused_ordering(963) 00:10:41.269 fused_ordering(964) 00:10:41.269 fused_ordering(965) 00:10:41.269 fused_ordering(966) 00:10:41.269 fused_ordering(967) 00:10:41.269 fused_ordering(968) 00:10:41.269 fused_ordering(969) 00:10:41.269 fused_ordering(970) 00:10:41.269 fused_ordering(971) 00:10:41.269 fused_ordering(972) 00:10:41.269 fused_ordering(973) 00:10:41.269 fused_ordering(974) 00:10:41.269 fused_ordering(975) 00:10:41.269 fused_ordering(976) 00:10:41.269 fused_ordering(977) 00:10:41.269 fused_ordering(978) 00:10:41.269 fused_ordering(979) 00:10:41.269 fused_ordering(980) 00:10:41.269 fused_ordering(981) 00:10:41.269 fused_ordering(982) 00:10:41.269 fused_ordering(983) 00:10:41.269 fused_ordering(984) 00:10:41.269 fused_ordering(985) 00:10:41.269 fused_ordering(986) 00:10:41.269 fused_ordering(987) 00:10:41.269 fused_ordering(988) 00:10:41.269 fused_ordering(989) 00:10:41.269 fused_ordering(990) 00:10:41.269 fused_ordering(991) 00:10:41.269 fused_ordering(992) 00:10:41.269 fused_ordering(993) 00:10:41.269 fused_ordering(994) 00:10:41.269 fused_ordering(995) 00:10:41.269 fused_ordering(996) 00:10:41.269 fused_ordering(997) 00:10:41.269 fused_ordering(998) 00:10:41.269 fused_ordering(999) 00:10:41.269 fused_ordering(1000) 00:10:41.269 fused_ordering(1001) 00:10:41.269 fused_ordering(1002) 00:10:41.269 fused_ordering(1003) 00:10:41.269 fused_ordering(1004) 00:10:41.269 fused_ordering(1005) 00:10:41.269 fused_ordering(1006) 00:10:41.269 fused_ordering(1007) 00:10:41.269 fused_ordering(1008) 00:10:41.269 fused_ordering(1009) 00:10:41.269 fused_ordering(1010) 00:10:41.269 fused_ordering(1011) 00:10:41.269 fused_ordering(1012) 00:10:41.269 fused_ordering(1013) 00:10:41.269 fused_ordering(1014) 00:10:41.269 fused_ordering(1015) 00:10:41.269 fused_ordering(1016) 00:10:41.269 fused_ordering(1017) 00:10:41.269 fused_ordering(1018) 00:10:41.269 fused_ordering(1019) 00:10:41.269 fused_ordering(1020) 00:10:41.269 fused_ordering(1021) 00:10:41.269 fused_ordering(1022) 00:10:41.269 fused_ordering(1023) 00:10:41.269 19:02:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:41.269 19:02:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:41.269 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:41.269 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:10:41.270 rmmod nvme_tcp 00:10:41.270 rmmod nvme_fabrics 00:10:41.270 rmmod nvme_keyring 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 207824 ']' 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 207824 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 207824 ']' 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 207824 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 207824 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 207824' 00:10:41.270 killing process with pid 207824 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 207824 00:10:41.270 19:02:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 207824 00:10:41.529 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:41.529 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:41.529 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:41.529 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:41.529 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:41.529 19:02:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.529 19:02:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:41.529 19:02:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.437 19:02:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:43.437 00:10:43.437 real 0m10.732s 00:10:43.437 user 0m5.499s 00:10:43.437 sys 0m5.316s 00:10:43.437 19:02:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:43.437 19:02:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:43.437 ************************************ 00:10:43.437 END TEST nvmf_fused_ordering 00:10:43.437 ************************************ 00:10:43.437 19:02:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:43.437 19:02:45 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:43.437 19:02:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:43.437 19:02:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.437 
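For reference, the fused_ordering target setup traced above reduces to six RPCs plus the test binary; rpc_cmd in the trace is a thin wrapper that ends up driving scripts/rpc.py against /var/tmp/spdk.sock. A sketch of the same sequence run by hand, every flag copied verbatim from the trace (the comments are my gloss, not log output):

# Transport, subsystem, and listener exactly as target/fused_ordering.sh@15-17 created them
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 1000 MB null bdev with 512-byte blocks, attached as namespace 1 (the "size: 1GB" above)
scripts/rpc.py bdev_null_create NULL1 1000 512
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Drive the test; it prints one fused_ordering(n) line per operation, 0 through 1023
test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'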
19:02:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:43.697 ************************************ 00:10:43.697 START TEST nvmf_delete_subsystem 00:10:43.697 ************************************ 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:43.697 * Looking for test storage... 00:10:43.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:43.697 19:02:46 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.697 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:43.698 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:43.698 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:43.698 19:02:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:50.275 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:50.275 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.275 
19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:50.275 Found net devices under 0000:86:00.0: cvl_0_0 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:50.275 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:50.276 Found net devices under 0000:86:00.1: cvl_0_1 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.276 19:02:51 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:50.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:10:50.276 00:10:50.276 --- 10.0.0.2 ping statistics --- 00:10:50.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.276 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:10:50.276 00:10:50.276 --- 10.0.0.1 ping statistics --- 00:10:50.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.276 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=211815 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 211815 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 211815 ']' 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.276 19:02:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.276 [2024-07-12 19:02:51.946118] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:10:50.276 [2024-07-12 19:02:51.946158] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.276 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.276 [2024-07-12 19:02:52.012947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:50.276 [2024-07-12 19:02:52.089735] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.276 [2024-07-12 19:02:52.089773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.276 [2024-07-12 19:02:52.089780] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.276 [2024-07-12 19:02:52.089786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.276 [2024-07-12 19:02:52.089791] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
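The nvmf_tcp_init phase traced above turns one dual-port NIC into a self-contained target/initiator pair: one port moves into a private network namespace and becomes the target side (10.0.0.2), the other stays in the default namespace as the initiator (10.0.0.1). A condensed sketch of the same setup, with the commands lifted from the trace (the interface names cvl_0_0/cvl_0_1 are specific to this testbed):

    # Give the target port its own namespace so one host can play both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator side keeps 10.0.0.1; target side gets 10.0.0.2.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring up both ports plus the namespace loopback.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port on the initiator-side interface, then check
    # reachability in both directions before starting the target.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side command from here on (nvmf_tgt included) runs under ip netns exec cvl_0_0_ns_spdk, which is exactly what NVMF_TARGET_NS_CMD holds.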
00:10:50.276 [2024-07-12 19:02:52.089842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.276 [2024-07-12 19:02:52.089842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.276 [2024-07-12 19:02:52.794274] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.276 [2024-07-12 19:02:52.814450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.276 NULL1 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.276 Delay0 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.276 19:02:52 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.276 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:50.535 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.535 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=212055 00:10:50.535 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:50.535 19:02:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:50.535 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.535 [2024-07-12 19:02:52.905267] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:52.443 19:02:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.443 19:02:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.443 19:02:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Write 
completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 [2024-07-12 19:02:54.979253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5137a0 is same with the state(5) to be set 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error 
(sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 [2024-07-12 19:02:54.980376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5133e0 is same with the state(5) to be set 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.443 starting I/O failed: -6 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Write completed with error (sct=0, sc=8) 00:10:52.443 Read completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 starting I/O failed: -6 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 starting I/O failed: -6 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 starting I/O failed: -6 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 starting I/O failed: -6 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 [2024-07-12 19:02:54.984180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa9c400cfe0 is same with the state(5) to be set 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 
00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Write completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:52.444 Read completed with error (sct=0, sc=8) 00:10:53.825 [2024-07-12 19:02:55.958817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x514ac0 is same with the state(5) to be set 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 [2024-07-12 19:02:55.982438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513000 is 
same with the state(5) to be set 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 [2024-07-12 19:02:55.982838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5135c0 is same with the state(5) to be set 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 [2024-07-12 19:02:55.986779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa9c400d2f0 is same with the state(5) to be set 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Write completed with error (sct=0, sc=8) 00:10:53.825 Read completed with error (sct=0, sc=8) 00:10:53.825 Read completed with 
error (sct=0, sc=8)
00:10:53.825 [2024-07-12 19:02:55.986933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa9c400d600 is same with the state(5) to be set
00:10:53.825 Initializing NVMe Controllers
00:10:53.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:53.826 Controller IO queue size 128, less than required.
00:10:53.826 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:53.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:53.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:53.826 Initialization complete. Launching workers.
00:10:53.826 ========================================================
00:10:53.826 Latency(us)
00:10:53.826 Device Information : IOPS MiB/s Average min max
00:10:53.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.37 0.08 904222.51 483.88 1006078.37
00:10:53.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.92 0.08 956222.49 223.99 2002566.11
00:10:53.826 ========================================================
00:10:53.826 Total : 320.28 0.16 929211.61 223.99 2002566.11
00:10:53.826
00:10:53.826 [2024-07-12 19:02:55.987587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x514ac0 (9): Bad file descriptor
00:10:53.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:53.826 19:02:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:53.826 19:02:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:10:53.826 19:02:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 212055
00:10:53.826 19:02:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 212055
00:10:54.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (212055) - No such process
00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 212055
00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 212055
00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 212055
00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
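The flood of 'completed with error (sct=0, sc=8)' entries above is the initiator's view of nvmf_delete_subsystem racing a live workload: in the NVMe generic status set, 0x8 reads as 'command aborted due to SQ deletion', which is what tearing down the subsystem does to its queues. The trace around this point then verifies that spdk_nvme_perf really died; a simplified reconstruction of delete_subsystem.sh lines 34-45 ($perf_pid stands in for the tracked PID, and the bail-out path is an assumption):

    # Poll until perf exits; kill -0 delivers no signal, it only tests
    # that the PID still exists.
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do
        (( delay++ > 30 )) && exit 1   # give up after ~15 s of 0.5 s polls
        sleep 0.5
    done

    # NOT succeeds only when its command fails: waiting on the reaped PID
    # must return nonzero for the test to pass.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # the real helper also special-cases es > 128 (signal deaths)
    }
    NOT wait "$perf_pid"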
00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.086 [2024-07-12 19:02:56.515711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.086 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.087 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=212538 00:10:54.087 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:54.087 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:54.087 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 212538 00:10:54.087 19:02:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:54.087 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.087 [2024-07-12 19:02:56.586072] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
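The second pass repeats the race with a shorter window: subsystem recreated, Delay0 re-attached, and a 3-second perf job launched before the delete. The invocation just started, reflowed with my reading of the flags (the -P gloss is an assumption; check spdk_nvme_perf's help output):

    # Cores 2 and 3 (-c 0xC), queue depth 128 (-q), 512 B I/Os (-o),
    # random mixed workload at 70% reads (-w randrw -M 70), for 3 s (-t).
    # -P 4: believed to select the number of I/O queues per namespace.
    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4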
00:10:54.655 19:02:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:54.655 19:02:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 212538
00:10:54.655 19:02:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:55.223 19:02:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:55.223 19:02:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 212538
00:10:55.223 19:02:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:55.482 19:02:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:55.482 19:02:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 212538
00:10:55.482 19:02:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:56.050 19:02:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:56.050 19:02:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 212538
00:10:56.050 19:02:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:56.618 19:02:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:56.618 19:02:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 212538
00:10:56.618 19:02:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:57.187 19:02:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:57.187 19:02:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 212538
00:10:57.187 19:02:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:57.446 Initializing NVMe Controllers
00:10:57.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:57.446 Controller IO queue size 128, less than required.
00:10:57.446 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:57.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:57.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:57.446 Initialization complete. Launching workers.
00:10:57.446 ========================================================
00:10:57.446 Latency(us)
00:10:57.446 Device Information : IOPS MiB/s Average min max
00:10:57.446 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002043.24 1000124.78 1008315.55
00:10:57.446 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003725.75 1000159.22 1009937.17
00:10:57.446 ========================================================
00:10:57.446 Total : 256.00 0.12 1002884.49 1000124.78 1009937.17
00:10:57.446
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 212538
00:10:57.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (212538) - No such process
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 212538
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:57.705 rmmod nvme_tcp
00:10:57.705 rmmod nvme_fabrics
00:10:57.705 rmmod nvme_keyring
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 211815 ']'
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 211815
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 211815 ']'
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 211815
00:10:57.705 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:10:57.706 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:10:57.706 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 211815
00:10:57.706 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:10:57.706 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:10:57.706 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 211815'
00:10:57.706 killing process with pid 211815
00:10:57.706 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 211815
00:10:57.706 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 211815
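nvmftestfini, traced above, unwinds the initiator first and the target second. A simplified sketch of the two helpers involved ($nvmfpid stands in for the target PID; the real nvmfcleanup retries the unload up to 20 times, and killprocess presumably escalates when the comm name is sudo):

    # Unload the kernel initiator stack; the rmmod lines above show the
    # dependent modules (nvme_tcp, nvme_fabrics, nvme_keyring) going away.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # killprocess: confirm the PID is alive and looks like our reactor
    # before signalling it, then reap it so the exit status is observed.
    kill -0 "$nvmfpid"
    process_name=$(ps --no-headers -o comm= "$nvmfpid")   # reactor_0 here
    if [[ $process_name != sudo ]]; then
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid"
        wait "$nvmfpid"
    fi

The network side is undone just below: nvmf_tcp_fini flushes the addresses and removes the cvl_0_0_ns_spdk namespace before the next test re-creates it.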
00:10:57.965 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:57.965 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:57.965 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:57.966 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:57.966 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:57.966 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.966 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.966 19:03:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.875 19:03:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:59.875 00:10:59.875 real 0m16.407s 00:10:59.875 user 0m30.418s 00:10:59.875 sys 0m5.086s 00:10:59.875 19:03:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:59.875 19:03:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.875 ************************************ 00:10:59.875 END TEST nvmf_delete_subsystem 00:10:59.875 ************************************ 00:11:00.135 19:03:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:00.135 19:03:02 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:00.135 19:03:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:00.135 19:03:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.135 19:03:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:00.135 ************************************ 00:11:00.135 START TEST nvmf_ns_masking 00:11:00.135 ************************************ 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:00.135 * Looking for test storage... 
00:11:00.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.135 19:03:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=96250f0d-27fa-4b50-8b88-3a5422e6aa7f 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3c6afca7-a2f9-4a23-ab04-ca72044a78cb 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=42f81333-7b02-452d-9f2c-546df9619ee8 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:00.136 19:03:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:06.720 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:06.720 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.720 
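gather_supported_nvmf_pci_devs, running again for the ns_masking test, matches candidate NICs by PCI vendor/device ID (the 0x8086/0x159b pair above is the Intel E810) and then asks sysfs which kernel netdev sits behind each function. That resolution step in isolation, using this testbed's PCI addresses:

    # Map each PCI function to its net interface name(s) -- the same
    # glob nvmf/common.sh uses, followed by stripping the path prefix.
    for pci in 0000:86:00.0 0000:86:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done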
19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:06.720 Found net devices under 0000:86:00.0: cvl_0_0 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:06.720 Found net devices under 0000:86:00.1: cvl_0_1 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.720 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:06.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:11:06.721 00:11:06.721 --- 10.0.0.2 ping statistics --- 00:11:06.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.721 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:06.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:11:06.721 00:11:06.721 --- 10.0.0.1 ping statistics --- 00:11:06.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.721 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=216997 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 216997 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 216997 ']' 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.721 19:03:08 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.721 19:03:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:06.721 [2024-07-12 19:03:08.440557] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:11:06.721 [2024-07-12 19:03:08.440598] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.721 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.721 [2024-07-12 19:03:08.512106] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.721 [2024-07-12 19:03:08.590724] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.721 [2024-07-12 19:03:08.590756] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.721 [2024-07-12 19:03:08.590764] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.721 [2024-07-12 19:03:08.590770] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.721 [2024-07-12 19:03:08.590775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.721 [2024-07-12 19:03:08.590791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.721 19:03:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.721 19:03:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:06.721 19:03:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:06.721 19:03:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:06.721 19:03:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:06.721 19:03:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.721 19:03:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:06.980 [2024-07-12 19:03:09.438512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.980 19:03:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:06.980 19:03:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:06.980 19:03:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:07.238 Malloc1 00:11:07.238 19:03:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:07.503 Malloc2 00:11:07.503 19:03:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
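[Editor's note] Everything traced up to this point reduces to a short sequence: one port of the NIC (cvl_0_0) is moved into a private network namespace to play the target, its sibling port (cvl_0_1) stays in the default namespace as the initiator, and the subsystem is assembled over JSON-RPC. A minimal sketch, with commands lifted from the trace; rpc.py stands for the full scripts/rpc.py path used in the log:

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # then, against the target's RPC socket: transport, backing bdevs, subsystem
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME

The two ping checks in the trace simply confirm that 10.0.0.2 is reachable from the initiator side and 10.0.0.1 from inside the namespace before any NVMe traffic is attempted.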
00:11:07.503 19:03:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:07.765 19:03:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.024 [2024-07-12 19:03:10.363627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.024 19:03:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:08.024 19:03:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 42f81333-7b02-452d-9f2c-546df9619ee8 -a 10.0.0.2 -s 4420 -i 4 00:11:08.024 19:03:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:08.024 19:03:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:08.024 19:03:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.024 19:03:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:08.024 19:03:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:10.563 [ 0]:0x1 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a4569fd0d2d4af6b4064cb1c6948d23 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a4569fd0d2d4af6b4064cb1c6948d23 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
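[Editor's note] The connect() and ns_is_visible steps traced above amount to the sketch below. The nvme-cli invocations are verbatim from the trace; the helper body is reconstructed from the traced commands in target/ns_masking.sh, so treat the function name and return convention as approximate:

    # connect over TCP with 4 I/O queues, as the traced connect() helper does
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420 -i 4

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"        # prints e.g. "[ 0]:0x1" when listed
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # a namespace masked away from this host identifies with an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1

The waitforserial loop that precedes this in the trace just polls lsblk for a device whose serial matches SPDKISFASTANDAWESOME, retrying up to 15 times with a 2-second sleep.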
00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:10.563 [ 0]:0x1 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a4569fd0d2d4af6b4064cb1c6948d23 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a4569fd0d2d4af6b4064cb1c6948d23 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:10.563 [ 1]:0x2 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ad81c4f974104d7487def114129e32a7 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ad81c4f974104d7487def114129e32a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:10.563 19:03:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.826 19:03:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.826 19:03:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:11.086 19:03:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:11.086 19:03:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 42f81333-7b02-452d-9f2c-546df9619ee8 -a 10.0.0.2 -s 4420 -i 4 00:11:11.345 19:03:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:11.345 19:03:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:11.345 19:03:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.345 19:03:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:11.345 19:03:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:11.345 19:03:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:13.252 19:03:15 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:13.252 [ 0]:0x2 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:13.252 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:13.511 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ad81c4f974104d7487def114129e32a7 00:11:13.511 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
ad81c4f974104d7487def114129e32a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:13.511 19:03:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:13.511 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:13.511 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:13.511 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:13.511 [ 0]:0x1 00:11:13.511 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:13.511 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:13.511 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a4569fd0d2d4af6b4064cb1c6948d23 00:11:13.511 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a4569fd0d2d4af6b4064cb1c6948d23 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:13.511 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:13.511 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:13.511 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:13.511 [ 1]:0x2 00:11:13.511 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:13.511 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ad81c4f974104d7487def114129e32a7 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ad81c4f974104d7487def114129e32a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:13.771 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:14.031 [ 0]:0x2 00:11:14.031 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:14.031 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:14.031 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ad81c4f974104d7487def114129e32a7 00:11:14.031 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ad81c4f974104d7487def114129e32a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:14.031 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:14.031 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.031 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:14.291 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:14.291 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 42f81333-7b02-452d-9f2c-546df9619ee8 -a 10.0.0.2 -s 4420 -i 4 00:11:14.291 19:03:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:14.291 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:14.291 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:14.291 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:14.291 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:14.291 19:03:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
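[Editor's note] This stretch is the core of the masking test: namespace 1 is re-added with --no-auto-visible so it starts hidden from every host, and nvmf_ns_add_host / nvmf_ns_remove_host then toggle per-host visibility while the NOT wrapper asserts the expected failures. A condensed sketch of the target-side RPCs, taken directly from the traced commands:

    # re-add the namespace hidden from all hosts by default
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # expose namespace 1 to host1 only; ns_is_visible 0x1 now succeeds
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # hide it again; the host sees an all-zero NGUID for nsid 1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

Namespace 2 was added without --no-auto-visible, which is why ns_is_visible 0x2 passes throughout regardless of the host list.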
00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:16.829 [ 0]:0x1 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a4569fd0d2d4af6b4064cb1c6948d23 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a4569fd0d2d4af6b4064cb1c6948d23 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:16.829 [ 1]:0x2 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ad81c4f974104d7487def114129e32a7 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ad81c4f974104d7487def114129e32a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:16.829 19:03:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:16.829 [ 0]:0x2 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ad81c4f974104d7487def114129e32a7 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ad81c4f974104d7487def114129e32a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:16.829 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.830 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:16.830 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.830 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:16.830 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.830 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:16.830 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.830 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:16.830 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:17.089 [2024-07-12 19:03:19.417363] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:17.089 request: 00:11:17.089 { 00:11:17.089 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.089 "nsid": 2, 00:11:17.089 "host": "nqn.2016-06.io.spdk:host1", 00:11:17.089 "method": "nvmf_ns_remove_host", 00:11:17.089 "req_id": 1 00:11:17.089 } 00:11:17.089 Got JSON-RPC error response 00:11:17.089 response: 00:11:17.089 { 00:11:17.089 "code": -32602, 00:11:17.089 "message": "Invalid parameters" 00:11:17.089 } 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:17.089 [ 0]:0x2 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ad81c4f974104d7487def114129e32a7 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
ad81c4f974104d7487def114129e32a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=219264 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 219264 /var/tmp/host.sock 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 219264 ']' 00:11:17.089 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:17.090 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:17.090 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:17.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:17.090 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:17.090 19:03:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:17.090 [2024-07-12 19:03:19.656890] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
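[Editor's note] The nvmf_ns_remove_host attempt against nsid 2 fails with the JSON-RPC -32602 error shown above because host1 was never in that namespace's host list. From here the test also starts a second SPDK app to act as the host side, giving it a separate RPC socket so its RPCs do not collide with the target's. Roughly, per the trace (paths shortened):

    # host-side app pinned to core 1, answering RPCs on its own socket
    ./build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    # the hostrpc wrapper seen below is rpc.py aimed at that socket, e.g.:
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0

With two hosts attached under different host NQNs, the final checks match each resulting bdev's UUID against the NGUID that was assigned per namespace.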
00:11:17.090 [2024-07-12 19:03:19.656935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid219264 ] 00:11:17.349 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.349 [2024-07-12 19:03:19.724135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.349 [2024-07-12 19:03:19.797466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.918 19:03:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.918 19:03:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:17.918 19:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.178 19:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:18.438 19:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 96250f0d-27fa-4b50-8b88-3a5422e6aa7f 00:11:18.438 19:03:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:18.438 19:03:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 96250F0D27FA4B508B883A5422E6AA7F -i 00:11:18.697 19:03:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3c6afca7-a2f9-4a23-ab04-ca72044a78cb 00:11:18.698 19:03:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:18.698 19:03:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3C6AFCA7A2F94A23AB04CA72044A78CB -i 00:11:18.698 19:03:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:18.957 19:03:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:19.217 19:03:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:19.217 19:03:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:19.476 nvme0n1 00:11:19.476 19:03:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:19.476 19:03:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:11:19.737 nvme1n2 00:11:19.737 19:03:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:19.737 19:03:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:19.737 19:03:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:19.737 19:03:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:19.737 19:03:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:19.996 19:03:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:19.996 19:03:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:19.996 19:03:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:19.996 19:03:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:20.256 19:03:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 96250f0d-27fa-4b50-8b88-3a5422e6aa7f == \9\6\2\5\0\f\0\d\-\2\7\f\a\-\4\b\5\0\-\8\b\8\8\-\3\a\5\4\2\2\e\6\a\a\7\f ]] 00:11:20.256 19:03:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:20.256 19:03:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:20.256 19:03:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:20.256 19:03:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 3c6afca7-a2f9-4a23-ab04-ca72044a78cb == \3\c\6\a\f\c\a\7\-\a\2\f\9\-\4\a\2\3\-\a\b\0\4\-\c\a\7\2\0\4\4\a\7\8\c\b ]] 00:11:20.256 19:03:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 219264 00:11:20.256 19:03:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 219264 ']' 00:11:20.256 19:03:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 219264 00:11:20.256 19:03:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:20.256 19:03:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:20.256 19:03:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 219264 00:11:20.514 19:03:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:20.514 19:03:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:20.514 19:03:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 219264' 00:11:20.514 killing process with pid 219264 00:11:20.514 19:03:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 219264 00:11:20.514 19:03:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 219264 00:11:20.773 19:03:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:21.032 19:03:23 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:21.032 rmmod nvme_tcp 00:11:21.032 rmmod nvme_fabrics 00:11:21.032 rmmod nvme_keyring 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 216997 ']' 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 216997 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 216997 ']' 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 216997 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 216997 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 216997' 00:11:21.032 killing process with pid 216997 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 216997 00:11:21.032 19:03:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 216997 00:11:21.292 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:21.292 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:21.292 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:21.293 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:21.293 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:21.293 19:03:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.293 19:03:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.293 19:03:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.202 19:03:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:23.202 00:11:23.202 real 0m23.269s 00:11:23.202 user 0m25.020s 00:11:23.202 sys 0m6.501s 00:11:23.202 19:03:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:23.202 19:03:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:23.202 ************************************ 00:11:23.202 END TEST nvmf_ns_masking 00:11:23.202 ************************************ 00:11:23.462 19:03:25 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:11:23.462 19:03:25 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:23.462 19:03:25 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:23.462 19:03:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:23.462 19:03:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:23.462 19:03:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:23.462 ************************************ 00:11:23.462 START TEST nvmf_nvme_cli 00:11:23.462 ************************************ 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:23.462 * Looking for test storage... 00:11:23.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:23.462 19:03:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:30.036 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:30.036 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:30.036 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:30.037 Found net devices under 0000:86:00.0: cvl_0_0 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:30.037 Found net devices under 0000:86:00.1: cvl_0_1 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.037 19:03:31 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:30.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:11:30.037 00:11:30.037 --- 10.0.0.2 ping statistics --- 00:11:30.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.037 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:30.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:11:30.037 00:11:30.037 --- 10.0.0.1 ping statistics --- 00:11:30.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.037 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=223320 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 223320 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 223320 ']' 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:30.037 19:03:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.037 [2024-07-12 19:03:31.739878] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
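The nvmf_tcp_init trace above reduces to a short namespace recipe: the first E810 port is moved into a private netns and addressed as the target, while the second stays in the root namespace as the initiator. A minimal sketch, assuming root and the cvl_0_0/cvl_0_1 port names and 10.0.0.x addresses seen in this log (both will differ on other rigs):

    # target port lives in its own netns; initiator port stays in the root ns
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # move target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator

Both pings returning sub-millisecond RTTs, as shown above, is the gate before nvmf_tgt is launched inside the namespace via "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt".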
00:11:30.037 [2024-07-12 19:03:31.739927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.037 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.037 [2024-07-12 19:03:31.810221] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.037 [2024-07-12 19:03:31.891145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.037 [2024-07-12 19:03:31.891184] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.037 [2024-07-12 19:03:31.891192] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.037 [2024-07-12 19:03:31.891198] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.037 [2024-07-12 19:03:31.891204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.037 [2024-07-12 19:03:31.891277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.037 [2024-07-12 19:03:31.891316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.037 [2024-07-12 19:03:31.891341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.037 [2024-07-12 19:03:31.891342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.037 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:30.037 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:30.037 19:03:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:30.037 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:30.037 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.037 19:03:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.037 19:03:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.037 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.037 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.037 [2024-07-12 19:03:32.594176] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.037 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.037 19:03:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:30.037 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.037 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.297 Malloc0 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.298 Malloc1 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.298 19:03:32 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.298 [2024-07-12 19:03:32.675849] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:30.298 00:11:30.298 Discovery Log Number of Records 2, Generation counter 2 00:11:30.298 =====Discovery Log Entry 0====== 00:11:30.298 trtype: tcp 00:11:30.298 adrfam: ipv4 00:11:30.298 subtype: current discovery subsystem 00:11:30.298 treq: not required 00:11:30.298 portid: 0 00:11:30.298 trsvcid: 4420 00:11:30.298 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:30.298 traddr: 10.0.0.2 00:11:30.298 eflags: explicit discovery connections, duplicate discovery information 00:11:30.298 sectype: none 00:11:30.298 =====Discovery Log Entry 1====== 00:11:30.298 trtype: tcp 00:11:30.298 adrfam: ipv4 00:11:30.298 subtype: nvme subsystem 00:11:30.298 treq: not required 00:11:30.298 portid: 0 00:11:30.298 trsvcid: 4420 00:11:30.298 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:30.298 traddr: 10.0.0.2 00:11:30.298 eflags: none 00:11:30.298 sectype: none 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:30.298 19:03:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:31.676 19:03:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:31.676 19:03:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:31.676 19:03:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.676 19:03:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:31.676 19:03:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:31.676 19:03:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:33.582 19:03:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:33.582 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:33.582 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.582 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:33.582 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:33.582 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:33.582 19:03:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:33.582 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:33.582 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.582 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:33.842 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:33.842 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.842 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:33.842 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.842 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:33.842 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:33.842 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.842 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:33.842 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:33.842 19:03:36 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.842 19:03:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:33.842 /dev/nvme0n1 ]] 00:11:33.842 19:03:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:33.842 19:03:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:33.842 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:33.842 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.843 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:33.843 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:33.843 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.843 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:33.843 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.843 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:33.843 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:33.843 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.843 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:33.843 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:33.843 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.843 19:03:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:33.843 19:03:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:34.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:34.102 rmmod nvme_tcp 00:11:34.102 rmmod nvme_fabrics 00:11:34.102 rmmod nvme_keyring 00:11:34.102 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.361 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:34.361 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:34.361 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 223320 ']' 00:11:34.361 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 223320 00:11:34.361 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 223320 ']' 00:11:34.361 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 223320 00:11:34.361 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:11:34.361 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:34.361 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 223320 00:11:34.361 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:34.361 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:34.361 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 223320' 00:11:34.361 killing process with pid 223320 00:11:34.361 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 223320 00:11:34.361 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 223320 00:11:34.621 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:34.621 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:34.621 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:34.621 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:34.621 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:34.621 19:03:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.621 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.621 19:03:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.531 19:03:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:36.531 00:11:36.531 real 0m13.181s 00:11:36.531 user 0m21.706s 00:11:36.531 sys 0m4.968s 00:11:36.531 19:03:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.531 19:03:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:36.531 ************************************ 00:11:36.531 END TEST nvmf_nvme_cli 00:11:36.531 ************************************ 00:11:36.531 19:03:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:36.531 19:03:39 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:36.531 19:03:39 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:11:36.531 19:03:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:36.531 19:03:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.531 19:03:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:36.531 ************************************ 00:11:36.531 START TEST nvmf_vfio_user 00:11:36.531 ************************************ 00:11:36.531 19:03:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:36.791 * Looking for test storage... 00:11:36.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.791 19:03:39 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:36.792 
19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=224787 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 224787' 00:11:36.792 Process pid: 224787 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 224787 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 224787 ']' 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.792 19:03:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:36.792 [2024-07-12 19:03:39.261550] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:11:36.792 [2024-07-12 19:03:39.261594] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.792 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.792 [2024-07-12 19:03:39.328656] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.052 [2024-07-12 19:03:39.409291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.052 [2024-07-12 19:03:39.409326] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.052 [2024-07-12 19:03:39.409333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.052 [2024-07-12 19:03:39.409339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.052 [2024-07-12 19:03:39.409344] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
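Once this target is up, nvmf_vfio_user.sh drives the per-device setup entirely over rpc.py. Condensed from the xtrace that follows (RPC_PY is shorthand introduced here for the scripts/rpc.py path used in this workspace, not a variable from the script):

    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC_PY nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $RPC_PY bdev_malloc_create 64 512 -b Malloc1
    $RPC_PY nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $RPC_PY nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $RPC_PY nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

Note the listener address is a directory, not an IP: the vfio-user controller is exposed as a cntrl socket under that path (visible later in the trace as /var/run/vfio-user/domain/vfio-user1/1/cntrl), which spdk_nvme_identify then consumes as its traddr. The same sequence repeats for vfio-user2/2 with Malloc2 and cnode2.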
00:11:37.052 [2024-07-12 19:03:39.409391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.052 [2024-07-12 19:03:39.409425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.052 [2024-07-12 19:03:39.409529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.052 [2024-07-12 19:03:39.409530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.621 19:03:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:37.621 19:03:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:37.621 19:03:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:38.557 19:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:38.816 19:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:38.816 19:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:38.816 19:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:38.816 19:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:38.816 19:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:39.074 Malloc1 00:11:39.074 19:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:39.333 19:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:39.333 19:03:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:39.592 19:03:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:39.592 19:03:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:39.592 19:03:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:39.851 Malloc2 00:11:39.851 19:03:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:39.851 19:03:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:40.109 19:03:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:40.370 19:03:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:40.370 19:03:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:40.370 19:03:42 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:40.370 19:03:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:40.370 19:03:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:40.370 19:03:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:40.370 [2024-07-12 19:03:42.771374] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:11:40.370 [2024-07-12 19:03:42.771407] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid225280 ] 00:11:40.370 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.370 [2024-07-12 19:03:42.800748] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:40.370 [2024-07-12 19:03:42.803114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:40.370 [2024-07-12 19:03:42.803132] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb68032a000 00:11:40.370 [2024-07-12 19:03:42.804111] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:40.370 [2024-07-12 19:03:42.805108] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:40.370 [2024-07-12 19:03:42.806119] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:40.370 [2024-07-12 19:03:42.807123] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:40.370 [2024-07-12 19:03:42.808124] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:40.370 [2024-07-12 19:03:42.809133] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:40.370 [2024-07-12 19:03:42.810143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:40.370 [2024-07-12 19:03:42.811149] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:40.371 [2024-07-12 19:03:42.812157] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:40.371 [2024-07-12 19:03:42.812166] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb68031f000 00:11:40.371 [2024-07-12 19:03:42.813113] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:40.371 [2024-07-12 19:03:42.826732] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:40.371 [2024-07-12 19:03:42.826758] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:40.371 [2024-07-12 19:03:42.829282] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:40.371 [2024-07-12 19:03:42.829318] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:40.371 [2024-07-12 19:03:42.829386] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:40.371 [2024-07-12 19:03:42.829405] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:40.371 [2024-07-12 19:03:42.829410] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:40.371 [2024-07-12 19:03:42.830281] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:40.371 [2024-07-12 19:03:42.830291] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:40.371 [2024-07-12 19:03:42.830297] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:40.371 [2024-07-12 19:03:42.831289] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:40.371 [2024-07-12 19:03:42.831297] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:40.371 [2024-07-12 19:03:42.831304] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:40.371 [2024-07-12 19:03:42.832298] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:40.371 [2024-07-12 19:03:42.832307] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:40.371 [2024-07-12 19:03:42.833300] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:40.371 [2024-07-12 19:03:42.833308] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:40.371 [2024-07-12 19:03:42.833312] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:40.371 [2024-07-12 19:03:42.833318] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:40.371 [2024-07-12 19:03:42.833423] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:40.371 [2024-07-12 19:03:42.833428] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:40.371 [2024-07-12 19:03:42.833432] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:40.371 [2024-07-12 19:03:42.834310] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:40.371 [2024-07-12 19:03:42.835316] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:40.371 [2024-07-12 19:03:42.836322] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:40.371 [2024-07-12 19:03:42.837321] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:40.371 [2024-07-12 19:03:42.837385] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:40.371 [2024-07-12 19:03:42.838334] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:40.371 [2024-07-12 19:03:42.838342] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:40.371 [2024-07-12 19:03:42.838347] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:40.371 [2024-07-12 19:03:42.838363] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:40.371 [2024-07-12 19:03:42.838374] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:40.371 [2024-07-12 19:03:42.838389] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:40.371 [2024-07-12 19:03:42.838393] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:40.371 [2024-07-12 19:03:42.838405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:40.371 [2024-07-12 19:03:42.838447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:40.371 [2024-07-12 19:03:42.838455] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:40.371 [2024-07-12 19:03:42.838462] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:40.371 [2024-07-12 19:03:42.838466] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:40.371 [2024-07-12 19:03:42.838470] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:40.371 [2024-07-12 19:03:42.838474] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:40.371 [2024-07-12 19:03:42.838478] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:40.371 [2024-07-12 19:03:42.838482] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:40.371 [2024-07-12 19:03:42.838489] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:40.371 [2024-07-12 19:03:42.838498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:40.371 [2024-07-12 19:03:42.838513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:40.371 [2024-07-12 19:03:42.838526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.371 [2024-07-12 19:03:42.838534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.371 [2024-07-12 19:03:42.838541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.371 [2024-07-12 19:03:42.838550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.371 [2024-07-12 19:03:42.838555] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:40.371 [2024-07-12 19:03:42.838562] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:40.371 [2024-07-12 19:03:42.838571] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:40.371 [2024-07-12 19:03:42.838580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:40.371 [2024-07-12 19:03:42.838585] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:40.371 [2024-07-12 19:03:42.838590] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:40.371 [2024-07-12 19:03:42.838595] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:40.371 [2024-07-12 19:03:42.838601] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:40.371 [2024-07-12 19:03:42.838608] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:40.371 [2024-07-12 19:03:42.838619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:40.371 [2024-07-12 19:03:42.838667] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:40.371 [2024-07-12 19:03:42.838675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:40.371 [2024-07-12 19:03:42.838681] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:40.371 [2024-07-12 19:03:42.838685] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:40.371 [2024-07-12 19:03:42.838691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:40.371 [2024-07-12 19:03:42.838708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:40.371 [2024-07-12 19:03:42.838717] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:40.371 [2024-07-12 19:03:42.838724] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:40.371 [2024-07-12 19:03:42.838731] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:40.371 [2024-07-12 19:03:42.838737] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:40.371 [2024-07-12 19:03:42.838741] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:40.371 [2024-07-12 19:03:42.838746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:40.371 [2024-07-12 19:03:42.838764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:40.371 [2024-07-12 19:03:42.838776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:40.371 [2024-07-12 19:03:42.838783] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:40.371 [2024-07-12 19:03:42.838791] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:40.371 [2024-07-12 19:03:42.838795] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:40.371 [2024-07-12 19:03:42.838800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:40.371 [2024-07-12 19:03:42.838815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:40.372 [2024-07-12 19:03:42.838822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:40.372 [2024-07-12 19:03:42.838828] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
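The *DEBUG* stream here is SPDK's generic controller bring-up state machine: read VS/CAP, toggle CC.EN, wait for CSTS.RDY, then the identify and set-features passes. To replay this bring-up by hand against the same vfio-user endpoint, the invocation from earlier in this log is all that is needed:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci

The -L flags enable the nvme, nvme_vfio and vfio_pci debug log components that produce this trace, and -g corresponds to the --single-file-segments EAL setup visible in the DPDK parameter dump above.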
00:11:40.372 [2024-07-12 19:03:42.838834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:40.372 [2024-07-12 19:03:42.838840] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:40.372 [2024-07-12 19:03:42.838844] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:40.372 [2024-07-12 19:03:42.838849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:40.372 [2024-07-12 19:03:42.838853] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:40.372 [2024-07-12 19:03:42.838857] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:40.372 [2024-07-12 19:03:42.838862] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:40.372 [2024-07-12 19:03:42.838877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:40.372 [2024-07-12 19:03:42.838889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:40.372 [2024-07-12 19:03:42.838899] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:40.372 [2024-07-12 19:03:42.838909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:40.372 [2024-07-12 19:03:42.838918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:40.372 [2024-07-12 19:03:42.838931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:40.372 [2024-07-12 19:03:42.838941] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:40.372 [2024-07-12 19:03:42.838951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:40.372 [2024-07-12 19:03:42.838962] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:40.372 [2024-07-12 19:03:42.838967] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:40.372 [2024-07-12 19:03:42.838970] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:40.372 [2024-07-12 19:03:42.838973] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:40.372 [2024-07-12 19:03:42.838978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:40.372 [2024-07-12 19:03:42.838986] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:40.372 
[2024-07-12 19:03:42.838990] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:40.372 [2024-07-12 19:03:42.838995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:40.372 [2024-07-12 19:03:42.839002] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:40.372 [2024-07-12 19:03:42.839005] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:40.372 [2024-07-12 19:03:42.839010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:40.372 [2024-07-12 19:03:42.839017] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:40.372 [2024-07-12 19:03:42.839021] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:40.372 [2024-07-12 19:03:42.839026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:40.372 [2024-07-12 19:03:42.839033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:40.372 [2024-07-12 19:03:42.839043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:40.372 [2024-07-12 19:03:42.839053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:40.372 [2024-07-12 19:03:42.839059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:40.372 ===================================================== 00:11:40.372 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:40.372 ===================================================== 00:11:40.372 Controller Capabilities/Features 00:11:40.372 ================================ 00:11:40.372 Vendor ID: 4e58 00:11:40.372 Subsystem Vendor ID: 4e58 00:11:40.372 Serial Number: SPDK1 00:11:40.372 Model Number: SPDK bdev Controller 00:11:40.372 Firmware Version: 24.09 00:11:40.372 Recommended Arb Burst: 6 00:11:40.372 IEEE OUI Identifier: 8d 6b 50 00:11:40.372 Multi-path I/O 00:11:40.372 May have multiple subsystem ports: Yes 00:11:40.372 May have multiple controllers: Yes 00:11:40.372 Associated with SR-IOV VF: No 00:11:40.372 Max Data Transfer Size: 131072 00:11:40.372 Max Number of Namespaces: 32 00:11:40.372 Max Number of I/O Queues: 127 00:11:40.372 NVMe Specification Version (VS): 1.3 00:11:40.372 NVMe Specification Version (Identify): 1.3 00:11:40.372 Maximum Queue Entries: 256 00:11:40.372 Contiguous Queues Required: Yes 00:11:40.372 Arbitration Mechanisms Supported 00:11:40.372 Weighted Round Robin: Not Supported 00:11:40.372 Vendor Specific: Not Supported 00:11:40.372 Reset Timeout: 15000 ms 00:11:40.372 Doorbell Stride: 4 bytes 00:11:40.372 NVM Subsystem Reset: Not Supported 00:11:40.372 Command Sets Supported 00:11:40.372 NVM Command Set: Supported 00:11:40.372 Boot Partition: Not Supported 00:11:40.372 Memory Page Size Minimum: 4096 bytes 00:11:40.372 Memory Page Size Maximum: 4096 bytes 00:11:40.372 Persistent Memory Region: Not Supported 
00:11:40.372 Optional Asynchronous Events Supported 00:11:40.372 Namespace Attribute Notices: Supported 00:11:40.372 Firmware Activation Notices: Not Supported 00:11:40.372 ANA Change Notices: Not Supported 00:11:40.372 PLE Aggregate Log Change Notices: Not Supported 00:11:40.372 LBA Status Info Alert Notices: Not Supported 00:11:40.372 EGE Aggregate Log Change Notices: Not Supported 00:11:40.372 Normal NVM Subsystem Shutdown event: Not Supported 00:11:40.372 Zone Descriptor Change Notices: Not Supported 00:11:40.372 Discovery Log Change Notices: Not Supported 00:11:40.372 Controller Attributes 00:11:40.372 128-bit Host Identifier: Supported 00:11:40.372 Non-Operational Permissive Mode: Not Supported 00:11:40.372 NVM Sets: Not Supported 00:11:40.372 Read Recovery Levels: Not Supported 00:11:40.372 Endurance Groups: Not Supported 00:11:40.372 Predictable Latency Mode: Not Supported 00:11:40.372 Traffic Based Keep ALive: Not Supported 00:11:40.372 Namespace Granularity: Not Supported 00:11:40.372 SQ Associations: Not Supported 00:11:40.372 UUID List: Not Supported 00:11:40.372 Multi-Domain Subsystem: Not Supported 00:11:40.372 Fixed Capacity Management: Not Supported 00:11:40.372 Variable Capacity Management: Not Supported 00:11:40.372 Delete Endurance Group: Not Supported 00:11:40.372 Delete NVM Set: Not Supported 00:11:40.372 Extended LBA Formats Supported: Not Supported 00:11:40.372 Flexible Data Placement Supported: Not Supported 00:11:40.372 00:11:40.372 Controller Memory Buffer Support 00:11:40.372 ================================ 00:11:40.373 Supported: No 00:11:40.373 00:11:40.373 Persistent Memory Region Support 00:11:40.373 ================================ 00:11:40.373 Supported: No 00:11:40.373 00:11:40.373 Admin Command Set Attributes 00:11:40.373 ============================ 00:11:40.373 Security Send/Receive: Not Supported 00:11:40.373 Format NVM: Not Supported 00:11:40.373 Firmware Activate/Download: Not Supported 00:11:40.373 Namespace Management: Not Supported 00:11:40.373 Device Self-Test: Not Supported 00:11:40.373 Directives: Not Supported 00:11:40.373 NVMe-MI: Not Supported 00:11:40.373 Virtualization Management: Not Supported 00:11:40.373 Doorbell Buffer Config: Not Supported 00:11:40.373 Get LBA Status Capability: Not Supported 00:11:40.373 Command & Feature Lockdown Capability: Not Supported 00:11:40.373 Abort Command Limit: 4 00:11:40.373 Async Event Request Limit: 4 00:11:40.373 Number of Firmware Slots: N/A 00:11:40.373 Firmware Slot 1 Read-Only: N/A 00:11:40.373 Firmware Activation Without Reset: N/A 00:11:40.373 Multiple Update Detection Support: N/A 00:11:40.373 Firmware Update Granularity: No Information Provided 00:11:40.373 Per-Namespace SMART Log: No 00:11:40.373 Asymmetric Namespace Access Log Page: Not Supported 00:11:40.373 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:40.373 Command Effects Log Page: Supported 00:11:40.373 Get Log Page Extended Data: Supported 00:11:40.373 Telemetry Log Pages: Not Supported 00:11:40.373 Persistent Event Log Pages: Not Supported 00:11:40.373 Supported Log Pages Log Page: May Support 00:11:40.373 Commands Supported & Effects Log Page: Not Supported 00:11:40.373 Feature Identifiers & Effects Log Page:May Support 00:11:40.373 NVMe-MI Commands & Effects Log Page: May Support 00:11:40.373 Data Area 4 for Telemetry Log: Not Supported 00:11:40.373 Error Log Page Entries Supported: 128 00:11:40.373 Keep Alive: Supported 00:11:40.373 Keep Alive Granularity: 10000 ms 00:11:40.373 00:11:40.373 NVM Command Set Attributes 
00:11:40.373 ========================== 00:11:40.373 Submission Queue Entry Size 00:11:40.373 Max: 64 00:11:40.373 Min: 64 00:11:40.373 Completion Queue Entry Size 00:11:40.373 Max: 16 00:11:40.373 Min: 16 00:11:40.373 Number of Namespaces: 32 00:11:40.373 Compare Command: Supported 00:11:40.373 Write Uncorrectable Command: Not Supported 00:11:40.373 Dataset Management Command: Supported 00:11:40.373 Write Zeroes Command: Supported 00:11:40.373 Set Features Save Field: Not Supported 00:11:40.373 Reservations: Not Supported 00:11:40.373 Timestamp: Not Supported 00:11:40.373 Copy: Supported 00:11:40.373 Volatile Write Cache: Present 00:11:40.373 Atomic Write Unit (Normal): 1 00:11:40.373 Atomic Write Unit (PFail): 1 00:11:40.373 Atomic Compare & Write Unit: 1 00:11:40.373 Fused Compare & Write: Supported 00:11:40.373 Scatter-Gather List 00:11:40.373 SGL Command Set: Supported (Dword aligned) 00:11:40.373 SGL Keyed: Not Supported 00:11:40.373 SGL Bit Bucket Descriptor: Not Supported 00:11:40.373 SGL Metadata Pointer: Not Supported 00:11:40.373 Oversized SGL: Not Supported 00:11:40.373 SGL Metadata Address: Not Supported 00:11:40.373 SGL Offset: Not Supported 00:11:40.373 Transport SGL Data Block: Not Supported 00:11:40.373 Replay Protected Memory Block: Not Supported 00:11:40.373 00:11:40.373 Firmware Slot Information 00:11:40.373 ========================= 00:11:40.373 Active slot: 1 00:11:40.373 Slot 1 Firmware Revision: 24.09 00:11:40.373 00:11:40.373 00:11:40.373 Commands Supported and Effects 00:11:40.373 ============================== 00:11:40.373 Admin Commands 00:11:40.373 -------------- 00:11:40.373 Get Log Page (02h): Supported 00:11:40.373 Identify (06h): Supported 00:11:40.373 Abort (08h): Supported 00:11:40.373 Set Features (09h): Supported 00:11:40.373 Get Features (0Ah): Supported 00:11:40.373 Asynchronous Event Request (0Ch): Supported 00:11:40.373 Keep Alive (18h): Supported 00:11:40.373 I/O Commands 00:11:40.373 ------------ 00:11:40.373 Flush (00h): Supported LBA-Change 00:11:40.373 Write (01h): Supported LBA-Change 00:11:40.373 Read (02h): Supported 00:11:40.373 Compare (05h): Supported 00:11:40.373 Write Zeroes (08h): Supported LBA-Change 00:11:40.373 Dataset Management (09h): Supported LBA-Change 00:11:40.373 Copy (19h): Supported LBA-Change 00:11:40.373 00:11:40.373 Error Log 00:11:40.373 ========= 00:11:40.373 00:11:40.373 Arbitration 00:11:40.373 =========== 00:11:40.373 Arbitration Burst: 1 00:11:40.373 00:11:40.373 Power Management 00:11:40.373 ================ 00:11:40.373 Number of Power States: 1 00:11:40.373 Current Power State: Power State #0 00:11:40.373 Power State #0: 00:11:40.373 Max Power: 0.00 W 00:11:40.373 Non-Operational State: Operational 00:11:40.373 Entry Latency: Not Reported 00:11:40.373 Exit Latency: Not Reported 00:11:40.373 Relative Read Throughput: 0 00:11:40.373 Relative Read Latency: 0 00:11:40.373 Relative Write Throughput: 0 00:11:40.373 Relative Write Latency: 0 00:11:40.373 Idle Power: Not Reported 00:11:40.373 Active Power: Not Reported 00:11:40.373 Non-Operational Permissive Mode: Not Supported 00:11:40.373 00:11:40.373 Health Information 00:11:40.373 ================== 00:11:40.373 Critical Warnings: 00:11:40.373 Available Spare Space: OK 00:11:40.373 Temperature: OK 00:11:40.373 Device Reliability: OK 00:11:40.373 Read Only: No 00:11:40.373 Volatile Memory Backup: OK 00:11:40.373 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:40.373 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:40.373 Available Spare: 0% 00:11:40.373 
[2024-07-12 19:03:42.839151] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:40.373 [2024-07-12 19:03:42.839162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:40.373 [2024-07-12 19:03:42.839191] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:40.373 [2024-07-12 19:03:42.839200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.373 [2024-07-12 19:03:42.839205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.373 [2024-07-12 19:03:42.839210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.373 [2024-07-12 19:03:42.839216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.373 [2024-07-12 19:03:42.842234] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:40.373 [2024-07-12 19:03:42.842249] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:40.373 [2024-07-12 19:03:42.842349] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:40.373 [2024-07-12 19:03:42.842400] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:40.373 [2024-07-12 19:03:42.842406] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:40.373 [2024-07-12 19:03:42.843361] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:40.373 [2024-07-12 19:03:42.843374] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:40.373 [2024-07-12 19:03:42.843424] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:40.373 [2024-07-12 19:03:42.845392] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:40.373 Available Spare Threshold: 0% 00:11:40.373 Life Percentage Used: 0% 00:11:40.373 Data Units Read: 0 00:11:40.373 Data Units Written: 0 00:11:40.373 Host Read Commands: 0 00:11:40.373 Host Write Commands: 0 00:11:40.373 Controller Busy Time: 0 minutes 00:11:40.373 Power Cycles: 0 00:11:40.373 Power On Hours: 0 hours 00:11:40.373 Unsafe Shutdowns: 0 00:11:40.373 Unrecoverable Media Errors: 0 00:11:40.373 Lifetime Error Log Entries: 0 00:11:40.373 Warning Temperature Time: 0 minutes 00:11:40.373 Critical Temperature Time: 0 minutes 00:11:40.373 00:11:40.373 Number of Queues 00:11:40.373 ================ 00:11:40.373 Number of I/O Submission Queues: 127 00:11:40.373 Number of I/O Completion Queues: 127 00:11:40.373 00:11:40.373 Active Namespaces 00:11:40.373 ================= 00:11:40.373 Namespace ID:1 00:11:40.373 Error Recovery Timeout: Unlimited 00:11:40.373 Command
Set Identifier: NVM (00h) 00:11:40.373 Deallocate: Supported 00:11:40.373 Deallocated/Unwritten Error: Not Supported 00:11:40.373 Deallocated Read Value: Unknown 00:11:40.373 Deallocate in Write Zeroes: Not Supported 00:11:40.373 Deallocated Guard Field: 0xFFFF 00:11:40.373 Flush: Supported 00:11:40.373 Reservation: Supported 00:11:40.373 Namespace Sharing Capabilities: Multiple Controllers 00:11:40.374 Size (in LBAs): 131072 (0GiB) 00:11:40.374 Capacity (in LBAs): 131072 (0GiB) 00:11:40.374 Utilization (in LBAs): 131072 (0GiB) 00:11:40.374 NGUID: AF079C778F694C409A02FBCBBB677F9A 00:11:40.374 UUID: af079c77-8f69-4c40-9a02-fbcbbb677f9a 00:11:40.374 Thin Provisioning: Not Supported 00:11:40.374 Per-NS Atomic Units: Yes 00:11:40.374 Atomic Boundary Size (Normal): 0 00:11:40.374 Atomic Boundary Size (PFail): 0 00:11:40.374 Atomic Boundary Offset: 0 00:11:40.374 Maximum Single Source Range Length: 65535 00:11:40.374 Maximum Copy Length: 65535 00:11:40.374 Maximum Source Range Count: 1 00:11:40.374 NGUID/EUI64 Never Reused: No 00:11:40.374 Namespace Write Protected: No 00:11:40.374 Number of LBA Formats: 1 00:11:40.374 Current LBA Format: LBA Format #00 00:11:40.374 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:40.374 00:11:40.374 19:03:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:40.374 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.633 [2024-07-12 19:03:43.059971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:45.907 Initializing NVMe Controllers 00:11:45.907 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:45.907 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:45.907 Initialization complete. Launching workers. 00:11:45.907 ======================================================== 00:11:45.907 Latency(us) 00:11:45.907 Device Information : IOPS MiB/s Average min max 00:11:45.907 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39855.55 155.69 3211.18 955.94 10611.25 00:11:45.907 ======================================================== 00:11:45.907 Total : 39855.55 155.69 3211.18 955.94 10611.25 00:11:45.907 00:11:45.907 [2024-07-12 19:03:48.080895] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:45.907 19:03:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:45.907 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.907 [2024-07-12 19:03:48.304891] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:51.186 Initializing NVMe Controllers 00:11:51.186 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:51.186 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:51.186 Initialization complete. Launching workers. 
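The read run above was launched with -q 128 and -o 4096; its table is internally consistent with Little's law, where sustained IOPS is approximately the queue depth divided by the average latency. The same check applies to the write-run table that follows. A small sketch of the arithmetic, with the numbers copied from the read table:

```c
#include <stdio.h>

/* Little's law sanity check for the perf table above: with a constant
 * queue depth, IOPS ~= qdepth / avg_latency. Values copied from the
 * read run (-q 128, avg 3211.18 us, reported 39855.55 IOPS). */
int main(void)
{
    double qdepth = 128.0;
    double avg_latency_s = 3211.18 / 1e6;
    printf("expected IOPS: %.2f\n", qdepth / avg_latency_s); /* ~39861 */
    return 0;
}
```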
00:11:51.186 ======================================================== 00:11:51.186 Latency(us) 00:11:51.186 Device Information : IOPS MiB/s Average min max 00:11:51.186 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.63 62.71 7978.11 6634.56 8978.57 00:11:51.186 ======================================================== 00:11:51.186 Total : 16054.63 62.71 7978.11 6634.56 8978.57 00:11:51.186 00:11:51.186 [2024-07-12 19:03:53.345656] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:51.186 19:03:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:51.186 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.186 [2024-07-12 19:03:53.544577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:56.465 [2024-07-12 19:03:58.624533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:56.465 Initializing NVMe Controllers 00:11:56.465 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:56.465 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:56.465 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:56.465 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:56.465 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:56.465 Initialization complete. Launching workers. 00:11:56.465 Starting thread on core 2 00:11:56.465 Starting thread on core 3 00:11:56.465 Starting thread on core 1 00:11:56.465 19:03:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:56.465 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.465 [2024-07-12 19:03:58.909641] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:59.760 [2024-07-12 19:04:01.974349] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:59.760 Initializing NVMe Controllers 00:11:59.760 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:59.760 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:59.760 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:59.760 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:59.760 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:59.760 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:59.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:59.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:59.760 Initialization complete. Launching workers. 
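The MiB/s column in both perf tables above is derived the same way: with 4 KiB I/O it is IOPS * 4096 / 2^20; the arbitration results below admit a similar per-core check. A sketch reproducing the two reported values:

```c
#include <stdio.h>

/* The perf tables report IOPS and MiB/s; with -o 4096 the second column
 * is IOPS * 4096 / 2^20. IOPS values copied from the two runs above. */
int main(void)
{
    double read_iops = 39855.55, write_iops = 16054.63;
    printf("read:  %.2f MiB/s\n", read_iops * 4096 / 1048576.0);  /* 155.69 */
    printf("write: %.2f MiB/s\n", write_iops * 4096 / 1048576.0); /* 62.71 */
    return 0;
}
```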
00:11:59.760 Starting thread on core 1 with urgent priority queue 00:11:59.760 Starting thread on core 2 with urgent priority queue 00:11:59.760 Starting thread on core 3 with urgent priority queue 00:11:59.760 Starting thread on core 0 with urgent priority queue 00:11:59.760 SPDK bdev Controller (SPDK1 ) core 0: 6816.33 IO/s 14.67 secs/100000 ios 00:11:59.760 SPDK bdev Controller (SPDK1 ) core 1: 8514.67 IO/s 11.74 secs/100000 ios 00:11:59.760 SPDK bdev Controller (SPDK1 ) core 2: 7106.00 IO/s 14.07 secs/100000 ios 00:11:59.760 SPDK bdev Controller (SPDK1 ) core 3: 6630.00 IO/s 15.08 secs/100000 ios 00:11:59.760 ======================================================== 00:11:59.760 00:11:59.760 19:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:59.760 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.760 [2024-07-12 19:04:02.245153] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:59.760 Initializing NVMe Controllers 00:11:59.760 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:59.760 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:59.760 Namespace ID: 1 size: 0GB 00:11:59.760 Initialization complete. 00:11:59.760 INFO: using host memory buffer for IO 00:11:59.760 Hello world! 00:11:59.760 [2024-07-12 19:04:02.280388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:59.760 19:04:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:00.018 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.018 [2024-07-12 19:04:02.553661] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:01.393 Initializing NVMe Controllers 00:12:01.393 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:01.393 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:01.393 Initialization complete. Launching workers. 
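In the arbitration table above, each core's "secs/100000 ios" column is simply 100000 (the -n argument) divided by that core's IO/s. A sketch reproducing the four reported values:

```c
#include <stdio.h>

/* The arbitration table above reports IO/s and "secs/100000 ios" per
 * core; the latter is 100000 / IO/s. Values copied from the log. */
int main(void)
{
    double io_per_sec[] = { 6816.33, 8514.67, 7106.00, 6630.00 }; /* cores 0-3 */
    for (int core = 0; core < 4; core++) {
        printf("core %d: %.2f secs/100000 ios\n",
               core, 100000.0 / io_per_sec[core]);
    }
    return 0;   /* prints 14.67, 11.74, 14.07, 15.08, matching the table */
}
```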
00:12:01.393 submit (in ns) avg, min, max = 6755.1, 3232.2, 5991858.3 00:12:01.393 complete (in ns) avg, min, max = 21111.3, 1789.6, 3998701.7 00:12:01.393 00:12:01.393 Submit histogram 00:12:01.393 ================ 00:12:01.393 Range in us Cumulative Count 00:12:01.393 3.228 - 3.242: 0.0181% ( 3) 00:12:01.393 3.270 - 3.283: 0.0907% ( 12) 00:12:01.393 3.283 - 3.297: 0.3506% ( 43) 00:12:01.393 3.297 - 3.311: 0.9733% ( 103) 00:12:01.393 3.311 - 3.325: 1.9648% ( 164) 00:12:01.393 3.325 - 3.339: 3.2707% ( 216) 00:12:01.393 3.339 - 3.353: 5.8521% ( 427) 00:12:01.393 3.353 - 3.367: 10.5737% ( 781) 00:12:01.393 3.367 - 3.381: 15.9482% ( 889) 00:12:01.393 3.381 - 3.395: 21.7701% ( 963) 00:12:01.393 3.395 - 3.409: 28.1361% ( 1053) 00:12:01.393 3.409 - 3.423: 33.8613% ( 947) 00:12:01.393 3.423 - 3.437: 38.9759% ( 846) 00:12:01.393 3.437 - 3.450: 44.2537% ( 873) 00:12:01.393 3.450 - 3.464: 49.5436% ( 875) 00:12:01.393 3.464 - 3.478: 53.8299% ( 709) 00:12:01.393 3.478 - 3.492: 58.1767% ( 719) 00:12:01.393 3.492 - 3.506: 64.1860% ( 994) 00:12:01.393 3.506 - 3.520: 70.0260% ( 966) 00:12:01.393 3.520 - 3.534: 73.7501% ( 616) 00:12:01.393 3.534 - 3.548: 78.1331% ( 725) 00:12:01.393 3.548 - 3.562: 81.8753% ( 619) 00:12:01.393 3.562 - 3.590: 86.2644% ( 726) 00:12:01.393 3.590 - 3.617: 87.8181% ( 257) 00:12:01.393 3.617 - 3.645: 88.8036% ( 163) 00:12:01.393 3.645 - 3.673: 90.2364% ( 237) 00:12:01.393 3.673 - 3.701: 91.8445% ( 266) 00:12:01.393 3.701 - 3.729: 93.4768% ( 270) 00:12:01.393 3.729 - 3.757: 95.1514% ( 277) 00:12:01.393 3.757 - 3.784: 96.7414% ( 263) 00:12:01.393 3.784 - 3.812: 97.9929% ( 207) 00:12:01.393 3.812 - 3.840: 98.7727% ( 129) 00:12:01.393 3.840 - 3.868: 99.1415% ( 61) 00:12:01.393 3.868 - 3.896: 99.4015% ( 43) 00:12:01.393 3.896 - 3.923: 99.5889% ( 31) 00:12:01.393 3.923 - 3.951: 99.6433% ( 9) 00:12:01.393 3.951 - 3.979: 99.6614% ( 3) 00:12:01.394 5.064 - 5.092: 99.6675% ( 1) 00:12:01.394 5.343 - 5.370: 99.6735% ( 1) 00:12:01.394 5.398 - 5.426: 99.6917% ( 3) 00:12:01.394 5.426 - 5.454: 99.6977% ( 1) 00:12:01.394 5.482 - 5.510: 99.7159% ( 3) 00:12:01.394 5.565 - 5.593: 99.7219% ( 1) 00:12:01.394 5.593 - 5.621: 99.7279% ( 1) 00:12:01.394 5.621 - 5.649: 99.7340% ( 1) 00:12:01.394 5.732 - 5.760: 99.7400% ( 1) 00:12:01.394 5.843 - 5.871: 99.7521% ( 2) 00:12:01.394 5.899 - 5.927: 99.7582% ( 1) 00:12:01.394 6.010 - 6.038: 99.7642% ( 1) 00:12:01.394 6.038 - 6.066: 99.7703% ( 1) 00:12:01.394 6.094 - 6.122: 99.7763% ( 1) 00:12:01.394 6.150 - 6.177: 99.7824% ( 1) 00:12:01.394 6.317 - 6.344: 99.7884% ( 1) 00:12:01.394 6.372 - 6.400: 99.7945% ( 1) 00:12:01.394 6.428 - 6.456: 99.8005% ( 1) 00:12:01.394 6.483 - 6.511: 99.8126% ( 2) 00:12:01.394 6.567 - 6.595: 99.8186% ( 1) 00:12:01.394 6.873 - 6.901: 99.8247% ( 1) 00:12:01.394 7.012 - 7.040: 99.8307% ( 1) 00:12:01.394 7.123 - 7.179: 99.8428% ( 2) 00:12:01.394 7.235 - 7.290: 99.8489% ( 1) 00:12:01.394 7.402 - 7.457: 99.8549% ( 1) 00:12:01.394 7.513 - 7.569: 99.8610% ( 1) 00:12:01.394 7.569 - 7.624: 99.8670% ( 1) 00:12:01.394 7.903 - 7.958: 99.8851% ( 3) 00:12:01.394 8.181 - 8.237: 99.8912% ( 1) 00:12:01.394 8.626 - 8.682: 99.8972% ( 1) 00:12:01.394 8.904 - 8.960: 99.9033% ( 1) 00:12:01.394 9.127 - 9.183: 99.9093% ( 1) 00:12:01.394 10.797 - 10.852: 99.9154% ( 1) 00:12:01.394 16.584 - 16.696: 99.9214% ( 1) 00:12:01.394 3989.148 - 4017.642: 99.9940% ( 12) 00:12:01.394 5983.722 - 6012.216: 100.0000% ( 1) 00:12:01.394 00:12:01.394 Complete histogram 00:12:01.394 ================== 00:12:01.394 Range in us Cumulative Count 00:12:01.394 1.781 - 1.795: 0.0060% 
( 1) 00:12:01.394 1.809 - 1.823: 0.1088% ( 17) 00:12:01.394 1.823 - 1.837: 0.8585% ( 124) 00:12:01.394 1.837 - 1.850: 2.1704% ( 217) 00:12:01.394 [2024-07-12 19:04:03.573623] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:01.394 1.850 - 1.864: 3.4823% ( 217) 00:12:01.394 1.864 - 1.878: 29.2425% ( 4261) 00:12:01.394 1.878 - 1.892: 80.3156% ( 8448) 00:12:01.394 1.892 - 1.906: 92.6969% ( 2048) 00:12:01.394 1.906 - 1.920: 95.4295% ( 452) 00:12:01.394 1.920 - 1.934: 96.0704% ( 106) 00:12:01.394 1.934 - 1.948: 96.8140% ( 123) 00:12:01.394 1.948 - 1.962: 98.0291% ( 201) 00:12:01.394 1.962 - 1.976: 98.9723% ( 156) 00:12:01.394 1.976 - 1.990: 99.2020% ( 38) 00:12:01.394 1.990 - 2.003: 99.2624% ( 10) 00:12:01.394 2.003 - 2.017: 99.2866% ( 4) 00:12:01.394 2.017 - 2.031: 99.2987% ( 2) 00:12:01.394 2.045 - 2.059: 99.3048% ( 1) 00:12:01.394 2.073 - 2.087: 99.3108% ( 1) 00:12:01.394 2.101 - 2.115: 99.3168% ( 1) 00:12:01.394 2.769 - 2.783: 99.3229% ( 1) 00:12:01.394 3.673 - 3.701: 99.3289% ( 1) 00:12:01.394 3.701 - 3.729: 99.3350% ( 1) 00:12:01.394 3.729 - 3.757: 99.3410% ( 1) 00:12:01.394 3.896 - 3.923: 99.3471% ( 1) 00:12:01.394 3.923 - 3.951: 99.3531% ( 1) 00:12:01.394 3.951 - 3.979: 99.3592% ( 1) 00:12:01.394 3.979 - 4.007: 99.3652% ( 1) 00:12:01.394 4.035 - 4.063: 99.3713% ( 1) 00:12:01.394 4.174 - 4.202: 99.3773% ( 1) 00:12:01.394 4.202 - 4.230: 99.3834% ( 1) 00:12:01.394 4.424 - 4.452: 99.3894% ( 1) 00:12:01.394 4.925 - 4.953: 99.3954% ( 1) 00:12:01.394 5.287 - 5.315: 99.4015% ( 1) 00:12:01.394 5.482 - 5.510: 99.4075% ( 1) 00:12:01.394 5.677 - 5.704: 99.4136% ( 1) 00:12:01.394 5.732 - 5.760: 99.4196% ( 1) 00:12:01.394 5.788 - 5.816: 99.4257% ( 1) 00:12:01.394 5.955 - 5.983: 99.4317% ( 1) 00:12:01.394 5.983 - 6.010: 99.4378% ( 1) 00:12:01.394 6.066 - 6.094: 99.4438% ( 1) 00:12:01.394 6.122 - 6.150: 99.4499% ( 1) 00:12:01.394 6.205 - 6.233: 99.4559% ( 1) 00:12:01.394 6.372 - 6.400: 99.4619% ( 1) 00:12:01.394 6.790 - 6.817: 99.4680% ( 1) 00:12:01.394 6.957 - 6.984: 99.4740% ( 1) 00:12:01.394 7.457 - 7.513: 99.4801% ( 1) 00:12:01.394 7.736 - 7.791: 99.4861% ( 1) 00:12:01.394 7.791 - 7.847: 99.4922% ( 1) 00:12:01.394 8.070 - 8.125: 99.4982% ( 1) 00:12:01.394 8.181 - 8.237: 99.5043% ( 1) 00:12:01.394 9.071 - 9.127: 99.5103% ( 1) 00:12:01.394 154.045 - 154.936: 99.5164% ( 1) 00:12:01.394 2208.278 - 2222.525: 99.5224% ( 1) 00:12:01.394 3989.148 - 4017.642: 100.0000% ( 79) 00:12:01.394 00:12:01.394 19:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:01.394 19:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:01.394 19:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:01.394 19:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:01.394 19:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:01.394 [ 00:12:01.394 { 00:12:01.394 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:01.394 "subtype": "Discovery", 00:12:01.394 "listen_addresses": [], 00:12:01.394 "allow_any_host": true, 00:12:01.394 "hosts": [] 00:12:01.394 }, 00:12:01.394 { 00:12:01.394 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:01.394 "subtype": "NVMe", 00:12:01.394 "listen_addresses": [ 00:12:01.394
{ 00:12:01.394 "trtype": "VFIOUSER", 00:12:01.394 "adrfam": "IPv4", 00:12:01.394 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:01.394 "trsvcid": "0" 00:12:01.394 } 00:12:01.394 ], 00:12:01.394 "allow_any_host": true, 00:12:01.394 "hosts": [], 00:12:01.394 "serial_number": "SPDK1", 00:12:01.394 "model_number": "SPDK bdev Controller", 00:12:01.394 "max_namespaces": 32, 00:12:01.394 "min_cntlid": 1, 00:12:01.394 "max_cntlid": 65519, 00:12:01.394 "namespaces": [ 00:12:01.394 { 00:12:01.394 "nsid": 1, 00:12:01.394 "bdev_name": "Malloc1", 00:12:01.394 "name": "Malloc1", 00:12:01.394 "nguid": "AF079C778F694C409A02FBCBBB677F9A", 00:12:01.394 "uuid": "af079c77-8f69-4c40-9a02-fbcbbb677f9a" 00:12:01.394 } 00:12:01.394 ] 00:12:01.394 }, 00:12:01.394 { 00:12:01.394 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:01.394 "subtype": "NVMe", 00:12:01.394 "listen_addresses": [ 00:12:01.394 { 00:12:01.394 "trtype": "VFIOUSER", 00:12:01.394 "adrfam": "IPv4", 00:12:01.394 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:01.394 "trsvcid": "0" 00:12:01.394 } 00:12:01.394 ], 00:12:01.394 "allow_any_host": true, 00:12:01.394 "hosts": [], 00:12:01.394 "serial_number": "SPDK2", 00:12:01.394 "model_number": "SPDK bdev Controller", 00:12:01.394 "max_namespaces": 32, 00:12:01.394 "min_cntlid": 1, 00:12:01.394 "max_cntlid": 65519, 00:12:01.395 "namespaces": [ 00:12:01.395 { 00:12:01.395 "nsid": 1, 00:12:01.395 "bdev_name": "Malloc2", 00:12:01.395 "name": "Malloc2", 00:12:01.395 "nguid": "1E5CD51D6D1A495681CF017A2CD01BCA", 00:12:01.395 "uuid": "1e5cd51d-6d1a-4956-81cf-017a2cd01bca" 00:12:01.395 } 00:12:01.395 ] 00:12:01.395 } 00:12:01.395 ] 00:12:01.395 19:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:01.395 19:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=228771 00:12:01.395 19:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:01.395 19:04:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:01.395 19:04:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:01.395 19:04:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:01.395 19:04:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:12:01.395 19:04:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:12:01.395 19:04:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:12:01.395 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.395 19:04:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:01.395 19:04:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:12:01.395 19:04:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:12:01.395 19:04:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:12:01.395 [2024-07-12 19:04:03.947702] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:01.654 19:04:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:01.654 19:04:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:01.654 19:04:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:01.654 19:04:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:01.654 19:04:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:01.654 Malloc3 00:12:01.654 19:04:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:01.913 [2024-07-12 19:04:04.381930] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:01.913 19:04:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:01.913 Asynchronous Event Request test 00:12:01.913 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:01.913 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:01.913 Registering asynchronous event callbacks... 00:12:01.913 Starting namespace attribute notice tests for all controllers... 00:12:01.913 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:01.913 aer_cb - Changed Namespace 00:12:01.913 Cleaning up... 00:12:02.174 [ 00:12:02.174 { 00:12:02.174 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:02.174 "subtype": "Discovery", 00:12:02.174 "listen_addresses": [], 00:12:02.174 "allow_any_host": true, 00:12:02.174 "hosts": [] 00:12:02.174 }, 00:12:02.174 { 00:12:02.174 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:02.174 "subtype": "NVMe", 00:12:02.174 "listen_addresses": [ 00:12:02.174 { 00:12:02.174 "trtype": "VFIOUSER", 00:12:02.174 "adrfam": "IPv4", 00:12:02.174 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:02.174 "trsvcid": "0" 00:12:02.174 } 00:12:02.174 ], 00:12:02.174 "allow_any_host": true, 00:12:02.174 "hosts": [], 00:12:02.174 "serial_number": "SPDK1", 00:12:02.174 "model_number": "SPDK bdev Controller", 00:12:02.174 "max_namespaces": 32, 00:12:02.174 "min_cntlid": 1, 00:12:02.174 "max_cntlid": 65519, 00:12:02.174 "namespaces": [ 00:12:02.174 { 00:12:02.174 "nsid": 1, 00:12:02.174 "bdev_name": "Malloc1", 00:12:02.174 "name": "Malloc1", 00:12:02.174 "nguid": "AF079C778F694C409A02FBCBBB677F9A", 00:12:02.174 "uuid": "af079c77-8f69-4c40-9a02-fbcbbb677f9a" 00:12:02.174 }, 00:12:02.174 { 00:12:02.174 "nsid": 2, 00:12:02.174 "bdev_name": "Malloc3", 00:12:02.174 "name": "Malloc3", 00:12:02.174 "nguid": "D8CD77B8FC7D41ECBC1B44993E4709E2", 00:12:02.174 "uuid": "d8cd77b8-fc7d-41ec-bc1b-44993e4709e2" 00:12:02.174 } 00:12:02.174 ] 00:12:02.174 }, 00:12:02.174 { 00:12:02.174 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:02.174 "subtype": "NVMe", 00:12:02.174 "listen_addresses": [ 00:12:02.174 { 00:12:02.174 "trtype": "VFIOUSER", 00:12:02.174 "adrfam": "IPv4", 00:12:02.174 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:02.174 "trsvcid": "0" 00:12:02.174 } 00:12:02.174 ], 00:12:02.174 "allow_any_host": true, 00:12:02.174 "hosts": [], 00:12:02.174 "serial_number": "SPDK2", 00:12:02.174 "model_number": "SPDK bdev Controller", 00:12:02.174 "max_namespaces": 32, 00:12:02.174 "min_cntlid": 1, 00:12:02.174 "max_cntlid": 65519, 00:12:02.174 
"namespaces": [ 00:12:02.174 { 00:12:02.174 "nsid": 1, 00:12:02.174 "bdev_name": "Malloc2", 00:12:02.174 "name": "Malloc2", 00:12:02.174 "nguid": "1E5CD51D6D1A495681CF017A2CD01BCA", 00:12:02.174 "uuid": "1e5cd51d-6d1a-4956-81cf-017a2cd01bca" 00:12:02.174 } 00:12:02.174 ] 00:12:02.174 } 00:12:02.174 ] 00:12:02.174 19:04:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 228771 00:12:02.174 19:04:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:02.174 19:04:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:02.174 19:04:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:02.174 19:04:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:02.174 [2024-07-12 19:04:04.601107] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:12:02.174 [2024-07-12 19:04:04.601130] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228967 ] 00:12:02.174 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.174 [2024-07-12 19:04:04.625605] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:02.174 [2024-07-12 19:04:04.635477] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:02.174 [2024-07-12 19:04:04.635497] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f466e08c000 00:12:02.174 [2024-07-12 19:04:04.636476] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:02.174 [2024-07-12 19:04:04.637478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:02.174 [2024-07-12 19:04:04.638489] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:02.174 [2024-07-12 19:04:04.639492] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:02.174 [2024-07-12 19:04:04.640500] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:02.174 [2024-07-12 19:04:04.641507] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:02.174 [2024-07-12 19:04:04.642511] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:02.174 [2024-07-12 19:04:04.643520] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:02.174 [2024-07-12 19:04:04.644526] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap 
offset 32 00:12:02.174 [2024-07-12 19:04:04.644535] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f466e081000 00:12:02.174 [2024-07-12 19:04:04.645475] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:02.174 [2024-07-12 19:04:04.658676] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:02.174 [2024-07-12 19:04:04.658696] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:02.174 [2024-07-12 19:04:04.663789] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:02.174 [2024-07-12 19:04:04.663825] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:02.174 [2024-07-12 19:04:04.663891] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:02.174 [2024-07-12 19:04:04.663906] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:02.174 [2024-07-12 19:04:04.663911] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:02.174 [2024-07-12 19:04:04.664791] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:02.174 [2024-07-12 19:04:04.664800] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:02.174 [2024-07-12 19:04:04.664807] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:02.174 [2024-07-12 19:04:04.665797] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:02.174 [2024-07-12 19:04:04.665805] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:02.174 [2024-07-12 19:04:04.665811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:02.175 [2024-07-12 19:04:04.666807] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:02.175 [2024-07-12 19:04:04.666817] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:02.175 [2024-07-12 19:04:04.667813] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:02.175 [2024-07-12 19:04:04.667822] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:02.175 [2024-07-12 19:04:04.667826] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 
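The records that follow trace the standard NVMe enable handshake over the vfio-user register path: program ASQ (offset 0x28), ACQ (0x30), and AQA (0x24), set CC.EN in the configuration register (0x14), then poll CSTS (0x1c) until RDY reads 1. A runnable sketch of that sequence; a plain in-memory array stands in for the device's register file, and rd32/wr32/wr64 are stand-ins for the logged nvme_vfio_ctrlr_get/set_reg_* accessors:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NVME_REG_CC   0x14  /* controller configuration */
#define NVME_REG_CSTS 0x1c  /* controller status */
#define NVME_REG_AQA  0x24  /* admin queue attributes */
#define NVME_REG_ASQ  0x28  /* admin submission queue base address */
#define NVME_REG_ACQ  0x30  /* admin completion queue base address */

static uint8_t regs[0x40];  /* fake register file standing in for the device */

static uint32_t rd32(uint32_t off) { uint32_t v; memcpy(&v, regs + off, 4); return v; }
static void wr32(uint32_t off, uint32_t v) { memcpy(regs + off, &v, 4); }
static void wr64(uint32_t off, uint64_t v) { memcpy(regs + off, &v, 8); }

int main(void)
{
    /* Admin queue setup, mirroring the offsets and values in the log. */
    wr64(NVME_REG_ASQ, 0x2000003c0000ULL);
    wr64(NVME_REG_ACQ, 0x2000003be000ULL);
    wr32(NVME_REG_AQA, 0xff00ff);               /* 256-entry admin SQ/CQ */

    wr32(NVME_REG_CC, rd32(NVME_REG_CC) | 1u);  /* CC.EN = 1 */

    /* The fake device becomes ready immediately; real code polls CSTS.RDY
     * bounded by the CAP.TO-derived timeout (15000 ms in this log). */
    wr32(NVME_REG_CSTS, 1u);
    while ((rd32(NVME_REG_CSTS) & 1u) == 0)
        ;
    printf("CC.EN = 1 && CSTS.RDY = %u - controller is ready\n",
           rd32(NVME_REG_CSTS) & 1u);
    return 0;
}
```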
00:12:02.175 [2024-07-12 19:04:04.667832] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:02.175 [2024-07-12 19:04:04.667937] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:02.175 [2024-07-12 19:04:04.667941] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:02.175 [2024-07-12 19:04:04.667946] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:02.175 [2024-07-12 19:04:04.668830] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:02.175 [2024-07-12 19:04:04.669835] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:02.175 [2024-07-12 19:04:04.670845] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:02.175 [2024-07-12 19:04:04.671844] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:02.175 [2024-07-12 19:04:04.671881] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:02.175 [2024-07-12 19:04:04.672857] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:02.175 [2024-07-12 19:04:04.672866] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:02.175 [2024-07-12 19:04:04.672870] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.672887] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:02.175 [2024-07-12 19:04:04.672896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.672907] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:02.175 [2024-07-12 19:04:04.672912] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:02.175 [2024-07-12 19:04:04.672923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:02.175 [2024-07-12 19:04:04.680232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:02.175 [2024-07-12 19:04:04.680243] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:02.175 [2024-07-12 19:04:04.680249] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:02.175 [2024-07-12 19:04:04.680256] 
nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:02.175 [2024-07-12 19:04:04.680260] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:02.175 [2024-07-12 19:04:04.680264] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:02.175 [2024-07-12 19:04:04.680268] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:02.175 [2024-07-12 19:04:04.680272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.680279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.680289] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:02.175 [2024-07-12 19:04:04.688230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:02.175 [2024-07-12 19:04:04.688244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.175 [2024-07-12 19:04:04.688252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.175 [2024-07-12 19:04:04.688259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.175 [2024-07-12 19:04:04.688266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.175 [2024-07-12 19:04:04.688270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.688277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.688286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:02.175 [2024-07-12 19:04:04.696231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:02.175 [2024-07-12 19:04:04.696238] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:02.175 [2024-07-12 19:04:04.696243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.696249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.696254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 
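The GET FEATURES and SET FEATURES commands in this sequence carry the feature identifier in cdw10. A sketch mapping the IDs that appear in this log to their spec names (the table below follows the NVMe spec, it is not an SPDK structure):

```c
#include <stdint.h>
#include <stdio.h>

/* Feature IDs seen in the cdw10 fields of this log, per the NVMe spec. */
static const char *feature_name(uint32_t fid)
{
    switch (fid) {
    case 0x01: return "ARBITRATION";
    case 0x02: return "POWER MANAGEMENT";
    case 0x04: return "TEMPERATURE THRESHOLD";
    case 0x07: return "NUMBER OF QUEUES";
    case 0x0b: return "ASYNC EVENT CONFIGURATION";
    case 0x0f: return "KEEP ALIVE TIMER";
    default:   return "unknown";
    }
}

int main(void)
{
    uint32_t seen[] = { 0x01, 0x02, 0x04, 0x07, 0x0b, 0x0f };
    for (unsigned i = 0; i < sizeof(seen) / sizeof(seen[0]); i++)
        printf("cdw10:%08x -> %s\n", (unsigned)seen[i], feature_name(seen[i]));
    return 0;
}
```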
00:12:02.175 [2024-07-12 19:04:04.696262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:02.175 [2024-07-12 19:04:04.704228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:02.175 [2024-07-12 19:04:04.704280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.704287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.704294] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:02.175 [2024-07-12 19:04:04.704301] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:02.175 [2024-07-12 19:04:04.704307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:02.175 [2024-07-12 19:04:04.712231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:02.175 [2024-07-12 19:04:04.712241] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:02.175 [2024-07-12 19:04:04.712249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.712255] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.712261] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:02.175 [2024-07-12 19:04:04.712265] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:02.175 [2024-07-12 19:04:04.712271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:02.175 [2024-07-12 19:04:04.720230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:02.175 [2024-07-12 19:04:04.720243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.720250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.720256] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:02.175 [2024-07-12 19:04:04.720260] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:02.175 [2024-07-12 19:04:04.720266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:02.175 [2024-07-12 19:04:04.728230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
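The SET FEATURES NUMBER OF QUEUES completion above returns cdw0:7e007e. Per the NVMe spec the granted counts are zero-based, with submission queues in bits 15:0 and completion queues in bits 31:16 of completion dword 0, so 0x7e grants 127 of each, matching the 127 I/O queues reported in the identify dumps. A sketch of the decode:

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the Number of Queues completion above (cdw0:7e007e): the
 * allocated counts are 0-based, NSQA in bits 15:0, NCQA in bits 31:16. */
int main(void)
{
    uint32_t cdw0 = 0x007e007e;           /* from the log record above */
    uint32_t nsqa = (cdw0 & 0xffff) + 1;  /* 0x7e + 1 = 127 submission queues */
    uint32_t ncqa = (cdw0 >> 16) + 1;     /* 0x7e + 1 = 127 completion queues */
    /* Matches "Max Number of I/O Queues: 127" in the identify dump. */
    printf("I/O queues granted: %u SQ / %u CQ\n", nsqa, ncqa);
    return 0;
}
```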
00:12:02.175 [2024-07-12 19:04:04.728239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.728246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.728252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.728258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.728262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.728267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.728271] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:02.175 [2024-07-12 19:04:04.728274] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:02.175 [2024-07-12 19:04:04.728279] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:02.175 [2024-07-12 19:04:04.728297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:02.175 [2024-07-12 19:04:04.736232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:02.175 [2024-07-12 19:04:04.736245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:02.436 [2024-07-12 19:04:04.744230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:02.436 [2024-07-12 19:04:04.744244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:02.436 [2024-07-12 19:04:04.752229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:02.436 [2024-07-12 19:04:04.752241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:02.436 [2024-07-12 19:04:04.760230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:02.436 [2024-07-12 19:04:04.760245] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:02.436 [2024-07-12 19:04:04.760250] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:02.436 [2024-07-12 19:04:04.760253] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:02.436 [2024-07-12 19:04:04.760256] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 
0x2000002f7000 00:12:02.436 [2024-07-12 19:04:04.760262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:02.436 [2024-07-12 19:04:04.760268] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:02.436 [2024-07-12 19:04:04.760272] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:02.436 [2024-07-12 19:04:04.760278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:02.436 [2024-07-12 19:04:04.760284] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:02.436 [2024-07-12 19:04:04.760288] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:02.436 [2024-07-12 19:04:04.760293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:02.436 [2024-07-12 19:04:04.760300] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:02.436 [2024-07-12 19:04:04.760304] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:02.436 [2024-07-12 19:04:04.760309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:02.436 [2024-07-12 19:04:04.768289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:02.436 [2024-07-12 19:04:04.768304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:02.437 [2024-07-12 19:04:04.768313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:02.437 [2024-07-12 19:04:04.768320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:02.437 ===================================================== 00:12:02.437 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:02.437 ===================================================== 00:12:02.437 Controller Capabilities/Features 00:12:02.437 ================================ 00:12:02.437 Vendor ID: 4e58 00:12:02.437 Subsystem Vendor ID: 4e58 00:12:02.437 Serial Number: SPDK2 00:12:02.437 Model Number: SPDK bdev Controller 00:12:02.437 Firmware Version: 24.09 00:12:02.437 Recommended Arb Burst: 6 00:12:02.437 IEEE OUI Identifier: 8d 6b 50 00:12:02.437 Multi-path I/O 00:12:02.437 May have multiple subsystem ports: Yes 00:12:02.437 May have multiple controllers: Yes 00:12:02.437 Associated with SR-IOV VF: No 00:12:02.437 Max Data Transfer Size: 131072 00:12:02.437 Max Number of Namespaces: 32 00:12:02.437 Max Number of I/O Queues: 127 00:12:02.437 NVMe Specification Version (VS): 1.3 00:12:02.437 NVMe Specification Version (Identify): 1.3 00:12:02.437 Maximum Queue Entries: 256 00:12:02.437 Contiguous Queues Required: Yes 00:12:02.437 Arbitration Mechanisms Supported 00:12:02.437 Weighted Round Robin: Not Supported 00:12:02.437 Vendor Specific: Not Supported 00:12:02.437 
Reset Timeout: 15000 ms 00:12:02.437 Doorbell Stride: 4 bytes 00:12:02.437 NVM Subsystem Reset: Not Supported 00:12:02.437 Command Sets Supported 00:12:02.437 NVM Command Set: Supported 00:12:02.437 Boot Partition: Not Supported 00:12:02.437 Memory Page Size Minimum: 4096 bytes 00:12:02.437 Memory Page Size Maximum: 4096 bytes 00:12:02.437 Persistent Memory Region: Not Supported 00:12:02.437 Optional Asynchronous Events Supported 00:12:02.437 Namespace Attribute Notices: Supported 00:12:02.437 Firmware Activation Notices: Not Supported 00:12:02.437 ANA Change Notices: Not Supported 00:12:02.437 PLE Aggregate Log Change Notices: Not Supported 00:12:02.437 LBA Status Info Alert Notices: Not Supported 00:12:02.437 EGE Aggregate Log Change Notices: Not Supported 00:12:02.437 Normal NVM Subsystem Shutdown event: Not Supported 00:12:02.437 Zone Descriptor Change Notices: Not Supported 00:12:02.437 Discovery Log Change Notices: Not Supported 00:12:02.437 Controller Attributes 00:12:02.437 128-bit Host Identifier: Supported 00:12:02.437 Non-Operational Permissive Mode: Not Supported 00:12:02.437 NVM Sets: Not Supported 00:12:02.437 Read Recovery Levels: Not Supported 00:12:02.437 Endurance Groups: Not Supported 00:12:02.437 Predictable Latency Mode: Not Supported 00:12:02.437 Traffic Based Keep ALive: Not Supported 00:12:02.437 Namespace Granularity: Not Supported 00:12:02.437 SQ Associations: Not Supported 00:12:02.437 UUID List: Not Supported 00:12:02.437 Multi-Domain Subsystem: Not Supported 00:12:02.437 Fixed Capacity Management: Not Supported 00:12:02.437 Variable Capacity Management: Not Supported 00:12:02.437 Delete Endurance Group: Not Supported 00:12:02.437 Delete NVM Set: Not Supported 00:12:02.437 Extended LBA Formats Supported: Not Supported 00:12:02.437 Flexible Data Placement Supported: Not Supported 00:12:02.437 00:12:02.437 Controller Memory Buffer Support 00:12:02.437 ================================ 00:12:02.437 Supported: No 00:12:02.437 00:12:02.437 Persistent Memory Region Support 00:12:02.437 ================================ 00:12:02.437 Supported: No 00:12:02.437 00:12:02.437 Admin Command Set Attributes 00:12:02.437 ============================ 00:12:02.437 Security Send/Receive: Not Supported 00:12:02.437 Format NVM: Not Supported 00:12:02.437 Firmware Activate/Download: Not Supported 00:12:02.437 Namespace Management: Not Supported 00:12:02.437 Device Self-Test: Not Supported 00:12:02.437 Directives: Not Supported 00:12:02.437 NVMe-MI: Not Supported 00:12:02.437 Virtualization Management: Not Supported 00:12:02.437 Doorbell Buffer Config: Not Supported 00:12:02.437 Get LBA Status Capability: Not Supported 00:12:02.437 Command & Feature Lockdown Capability: Not Supported 00:12:02.437 Abort Command Limit: 4 00:12:02.437 Async Event Request Limit: 4 00:12:02.437 Number of Firmware Slots: N/A 00:12:02.437 Firmware Slot 1 Read-Only: N/A 00:12:02.437 Firmware Activation Without Reset: N/A 00:12:02.437 Multiple Update Detection Support: N/A 00:12:02.437 Firmware Update Granularity: No Information Provided 00:12:02.437 Per-Namespace SMART Log: No 00:12:02.437 Asymmetric Namespace Access Log Page: Not Supported 00:12:02.437 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:02.437 Command Effects Log Page: Supported 00:12:02.437 Get Log Page Extended Data: Supported 00:12:02.437 Telemetry Log Pages: Not Supported 00:12:02.437 Persistent Event Log Pages: Not Supported 00:12:02.437 Supported Log Pages Log Page: May Support 00:12:02.437 Commands Supported & Effects Log Page: Not 
Supported 00:12:02.437 Feature Identifiers & Effects Log Page:May Support 00:12:02.437 NVMe-MI Commands & Effects Log Page: May Support 00:12:02.437 Data Area 4 for Telemetry Log: Not Supported 00:12:02.437 Error Log Page Entries Supported: 128 00:12:02.437 Keep Alive: Supported 00:12:02.437 Keep Alive Granularity: 10000 ms 00:12:02.437 00:12:02.437 NVM Command Set Attributes 00:12:02.437 ========================== 00:12:02.437 Submission Queue Entry Size 00:12:02.437 Max: 64 00:12:02.437 Min: 64 00:12:02.437 Completion Queue Entry Size 00:12:02.437 Max: 16 00:12:02.437 Min: 16 00:12:02.437 Number of Namespaces: 32 00:12:02.437 Compare Command: Supported 00:12:02.437 Write Uncorrectable Command: Not Supported 00:12:02.437 Dataset Management Command: Supported 00:12:02.437 Write Zeroes Command: Supported 00:12:02.437 Set Features Save Field: Not Supported 00:12:02.437 Reservations: Not Supported 00:12:02.437 Timestamp: Not Supported 00:12:02.437 Copy: Supported 00:12:02.437 Volatile Write Cache: Present 00:12:02.437 Atomic Write Unit (Normal): 1 00:12:02.437 Atomic Write Unit (PFail): 1 00:12:02.437 Atomic Compare & Write Unit: 1 00:12:02.437 Fused Compare & Write: Supported 00:12:02.437 Scatter-Gather List 00:12:02.437 SGL Command Set: Supported (Dword aligned) 00:12:02.437 SGL Keyed: Not Supported 00:12:02.437 SGL Bit Bucket Descriptor: Not Supported 00:12:02.437 SGL Metadata Pointer: Not Supported 00:12:02.437 Oversized SGL: Not Supported 00:12:02.437 SGL Metadata Address: Not Supported 00:12:02.437 SGL Offset: Not Supported 00:12:02.437 Transport SGL Data Block: Not Supported 00:12:02.437 Replay Protected Memory Block: Not Supported 00:12:02.437 00:12:02.437 Firmware Slot Information 00:12:02.437 ========================= 00:12:02.437 Active slot: 1 00:12:02.437 Slot 1 Firmware Revision: 24.09 00:12:02.437 00:12:02.437 00:12:02.437 Commands Supported and Effects 00:12:02.437 ============================== 00:12:02.437 Admin Commands 00:12:02.437 -------------- 00:12:02.437 Get Log Page (02h): Supported 00:12:02.437 Identify (06h): Supported 00:12:02.437 Abort (08h): Supported 00:12:02.437 Set Features (09h): Supported 00:12:02.437 Get Features (0Ah): Supported 00:12:02.437 Asynchronous Event Request (0Ch): Supported 00:12:02.437 Keep Alive (18h): Supported 00:12:02.437 I/O Commands 00:12:02.437 ------------ 00:12:02.437 Flush (00h): Supported LBA-Change 00:12:02.437 Write (01h): Supported LBA-Change 00:12:02.437 Read (02h): Supported 00:12:02.437 Compare (05h): Supported 00:12:02.437 Write Zeroes (08h): Supported LBA-Change 00:12:02.437 Dataset Management (09h): Supported LBA-Change 00:12:02.437 Copy (19h): Supported LBA-Change 00:12:02.437 00:12:02.437 Error Log 00:12:02.437 ========= 00:12:02.437 00:12:02.437 Arbitration 00:12:02.437 =========== 00:12:02.437 Arbitration Burst: 1 00:12:02.437 00:12:02.437 Power Management 00:12:02.437 ================ 00:12:02.437 Number of Power States: 1 00:12:02.437 Current Power State: Power State #0 00:12:02.437 Power State #0: 00:12:02.437 Max Power: 0.00 W 00:12:02.437 Non-Operational State: Operational 00:12:02.437 Entry Latency: Not Reported 00:12:02.437 Exit Latency: Not Reported 00:12:02.437 Relative Read Throughput: 0 00:12:02.437 Relative Read Latency: 0 00:12:02.437 Relative Write Throughput: 0 00:12:02.437 Relative Write Latency: 0 00:12:02.437 Idle Power: Not Reported 00:12:02.437 Active Power: Not Reported 00:12:02.437 Non-Operational Permissive Mode: Not Supported 00:12:02.437 00:12:02.437 Health Information 00:12:02.437 
================== 00:12:02.437 Critical Warnings: 00:12:02.437 Available Spare Space: OK 00:12:02.437 Temperature: OK 00:12:02.437 Device Reliability: OK 00:12:02.437 Read Only: No 00:12:02.437 Volatile Memory Backup: OK 00:12:02.437 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:02.437 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:02.437 Available Spare: 0% 00:12:02.437 [2024-07-12 19:04:04.768408] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:02.438 [2024-07-12 19:04:04.776229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:02.438 [2024-07-12 19:04:04.776262] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:02.438 [2024-07-12 19:04:04.776271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.438 [2024-07-12 19:04:04.776277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.438 [2024-07-12 19:04:04.776282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.438 [2024-07-12 19:04:04.776288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.438 [2024-07-12 19:04:04.776341] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:02.438 [2024-07-12 19:04:04.776351] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:02.438 [2024-07-12 19:04:04.777346] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:02.438 [2024-07-12 19:04:04.777392] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:02.438 [2024-07-12 19:04:04.777398] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:02.438 [2024-07-12 19:04:04.778348] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:02.438 [2024-07-12 19:04:04.778359] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:02.438 [2024-07-12 19:04:04.778404] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:02.438 [2024-07-12 19:04:04.779381] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:02.438 Available Spare Threshold: 0% 00:12:02.438 Life Percentage Used: 0% 00:12:02.438 Data Units Read: 0 00:12:02.438 Data Units Written: 0 00:12:02.438 Host Read Commands: 0 00:12:02.438 Host Write Commands: 0 00:12:02.438 Controller Busy Time: 0 minutes 00:12:02.438 Power Cycles: 0 00:12:02.438 Power On Hours: 0 hours 00:12:02.438 Unsafe Shutdowns: 0 00:12:02.438 Unrecoverable Media Errors: 0 00:12:02.438 Lifetime Error Log Entries: 0 00:12:02.438 Warning Temperature Time: 0 minutes
00:12:02.438 Critical Temperature Time: 0 minutes 00:12:02.438 00:12:02.438 Number of Queues 00:12:02.438 ================ 00:12:02.438 Number of I/O Submission Queues: 127 00:12:02.438 Number of I/O Completion Queues: 127 00:12:02.438 00:12:02.438 Active Namespaces 00:12:02.438 ================= 00:12:02.438 Namespace ID:1 00:12:02.438 Error Recovery Timeout: Unlimited 00:12:02.438 Command Set Identifier: NVM (00h) 00:12:02.438 Deallocate: Supported 00:12:02.438 Deallocated/Unwritten Error: Not Supported 00:12:02.438 Deallocated Read Value: Unknown 00:12:02.438 Deallocate in Write Zeroes: Not Supported 00:12:02.438 Deallocated Guard Field: 0xFFFF 00:12:02.438 Flush: Supported 00:12:02.438 Reservation: Supported 00:12:02.438 Namespace Sharing Capabilities: Multiple Controllers 00:12:02.438 Size (in LBAs): 131072 (0GiB) 00:12:02.438 Capacity (in LBAs): 131072 (0GiB) 00:12:02.438 Utilization (in LBAs): 131072 (0GiB) 00:12:02.438 NGUID: 1E5CD51D6D1A495681CF017A2CD01BCA 00:12:02.438 UUID: 1e5cd51d-6d1a-4956-81cf-017a2cd01bca 00:12:02.438 Thin Provisioning: Not Supported 00:12:02.438 Per-NS Atomic Units: Yes 00:12:02.438 Atomic Boundary Size (Normal): 0 00:12:02.438 Atomic Boundary Size (PFail): 0 00:12:02.438 Atomic Boundary Offset: 0 00:12:02.438 Maximum Single Source Range Length: 65535 00:12:02.438 Maximum Copy Length: 65535 00:12:02.438 Maximum Source Range Count: 1 00:12:02.438 NGUID/EUI64 Never Reused: No 00:12:02.438 Namespace Write Protected: No 00:12:02.438 Number of LBA Formats: 1 00:12:02.438 Current LBA Format: LBA Format #00 00:12:02.438 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:02.438 00:12:02.438 19:04:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:02.438 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.438 [2024-07-12 19:04:04.991774] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:07.715 Initializing NVMe Controllers 00:12:07.715 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:07.715 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:07.715 Initialization complete. Launching workers. 
00:12:07.715 ======================================================== 00:12:07.715 Latency(us) 00:12:07.715 Device Information : IOPS MiB/s Average min max 00:12:07.715 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39943.01 156.03 3204.38 943.68 9390.01 00:12:07.715 ======================================================== 00:12:07.715 Total : 39943.01 156.03 3204.38 943.68 9390.01 00:12:07.715 00:12:07.715 [2024-07-12 19:04:10.101469] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:07.715 19:04:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:07.715 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.974 [2024-07-12 19:04:10.320142] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:13.250 Initializing NVMe Controllers 00:12:13.250 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:13.250 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:13.250 Initialization complete. Launching workers. 00:12:13.250 ======================================================== 00:12:13.250 Latency(us) 00:12:13.250 Device Information : IOPS MiB/s Average min max 00:12:13.250 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39854.32 155.68 3211.29 987.32 10585.39 00:12:13.250 ======================================================== 00:12:13.250 Total : 39854.32 155.68 3211.29 987.32 10585.39 00:12:13.250 00:12:13.250 [2024-07-12 19:04:15.339352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:13.251 19:04:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:13.251 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.251 [2024-07-12 19:04:15.531710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.531 [2024-07-12 19:04:20.664320] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.531 Initializing NVMe Controllers 00:12:18.531 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:18.531 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:18.531 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:18.531 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:18.531 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:18.531 Initialization complete. Launching workers. 
00:12:18.531 Starting thread on core 2 00:12:18.531 Starting thread on core 3 00:12:18.531 Starting thread on core 1 00:12:18.531 19:04:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:18.531 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.531 [2024-07-12 19:04:20.946669] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:22.737 [2024-07-12 19:04:24.810431] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:22.737 Initializing NVMe Controllers 00:12:22.737 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:22.737 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:22.737 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:22.737 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:22.737 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:22.737 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:22.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:22.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:22.737 Initialization complete. Launching workers. 00:12:22.737 Starting thread on core 1 with urgent priority queue 00:12:22.737 Starting thread on core 2 with urgent priority queue 00:12:22.737 Starting thread on core 3 with urgent priority queue 00:12:22.737 Starting thread on core 0 with urgent priority queue 00:12:22.737 SPDK bdev Controller (SPDK2 ) core 0: 1935.00 IO/s 51.68 secs/100000 ios 00:12:22.737 SPDK bdev Controller (SPDK2 ) core 1: 2063.67 IO/s 48.46 secs/100000 ios 00:12:22.737 SPDK bdev Controller (SPDK2 ) core 2: 2383.67 IO/s 41.95 secs/100000 ios 00:12:22.737 SPDK bdev Controller (SPDK2 ) core 3: 1505.33 IO/s 66.43 secs/100000 ios 00:12:22.737 ======================================================== 00:12:22.737 00:12:22.737 19:04:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:22.737 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.737 [2024-07-12 19:04:25.076361] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:22.737 Initializing NVMe Controllers 00:12:22.737 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:22.737 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:22.737 Namespace ID: 1 size: 0GB 00:12:22.737 Initialization complete. 00:12:22.737 INFO: using host memory buffer for IO 00:12:22.737 Hello world! 
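Every tool exercised in this stretch of the run (spdk_nvme_perf, reconnect, arbitration, hello_world) points at the target the same way: a quoted SPDK transport ID that selects trtype VFIOUSER, uses the per-controller socket directory as traddr, and names the subsystem NQN. A minimal standalone sketch of the invocations traced above, assuming the same SPDK checkout path and a target still listening on vfio-user2/2 as set up earlier in this log:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

    # 4 KiB sequential reads, queue depth 128, 5 s, pinned to core 1 (mask 0x2)
    "$SPDK"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

    # one-shot write/read smoke test against the same endpoint
    "$SPDK"/build/examples/hello_world -d 256 -g -r "$TRID"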
00:12:22.737 [2024-07-12 19:04:25.088447] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:22.737 19:04:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:22.737 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.997 [2024-07-12 19:04:25.349823] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:23.945 Initializing NVMe Controllers 00:12:23.945 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:23.945 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:23.945 Initialization complete. Launching workers. 00:12:23.945 submit (in ns) avg, min, max = 8449.0, 3208.7, 4000987.8 00:12:23.945 complete (in ns) avg, min, max = 19889.4, 1757.4, 3999686.1 00:12:23.945 00:12:23.945 Submit histogram 00:12:23.945 ================ 00:12:23.945 Range in us Cumulative Count 00:12:23.945 3.200 - 3.214: 0.0061% ( 1) 00:12:23.945 3.214 - 3.228: 0.0184% ( 2) 00:12:23.945 3.228 - 3.242: 0.0429% ( 4) 00:12:23.945 3.242 - 3.256: 0.0613% ( 3) 00:12:23.945 3.256 - 3.270: 0.1165% ( 9) 00:12:23.945 3.270 - 3.283: 0.4169% ( 49) 00:12:23.945 3.283 - 3.297: 2.1277% ( 279) 00:12:23.945 3.297 - 3.311: 5.4878% ( 548) 00:12:23.945 3.311 - 3.325: 9.2158% ( 608) 00:12:23.945 3.325 - 3.339: 13.4711% ( 694) 00:12:23.945 3.339 - 3.353: 18.8791% ( 882) 00:12:23.945 3.353 - 3.367: 24.5447% ( 924) 00:12:23.945 3.367 - 3.381: 30.2226% ( 926) 00:12:23.945 3.381 - 3.395: 35.7839% ( 907) 00:12:23.945 3.395 - 3.409: 40.7689% ( 813) 00:12:23.945 3.409 - 3.423: 45.1714% ( 718) 00:12:23.945 3.423 - 3.437: 50.4139% ( 855) 00:12:23.945 3.437 - 3.450: 57.0789% ( 1087) 00:12:23.945 3.450 - 3.464: 62.1191% ( 822) 00:12:23.945 3.464 - 3.478: 66.2088% ( 667) 00:12:23.945 3.478 - 3.492: 71.5924% ( 878) 00:12:23.945 3.492 - 3.506: 76.5467% ( 808) 00:12:23.945 3.506 - 3.520: 79.9313% ( 552) 00:12:23.945 3.520 - 3.534: 82.3288% ( 391) 00:12:23.945 3.534 - 3.548: 84.5423% ( 361) 00:12:23.945 3.548 - 3.562: 85.9096% ( 223) 00:12:23.945 3.562 - 3.590: 87.7675% ( 303) 00:12:23.945 3.590 - 3.617: 89.3004% ( 250) 00:12:23.945 3.617 - 3.645: 90.7107% ( 230) 00:12:23.945 3.645 - 3.673: 92.2006% ( 243) 00:12:23.945 3.673 - 3.701: 94.0462% ( 301) 00:12:23.945 3.701 - 3.729: 95.7876% ( 284) 00:12:23.945 3.729 - 3.757: 97.1672% ( 225) 00:12:23.945 3.757 - 3.784: 98.1728% ( 164) 00:12:23.945 3.784 - 3.812: 98.7859% ( 100) 00:12:23.945 3.812 - 3.840: 99.1293% ( 56) 00:12:23.945 3.840 - 3.868: 99.4052% ( 45) 00:12:23.945 3.868 - 3.896: 99.4911% ( 14) 00:12:23.945 3.896 - 3.923: 99.5708% ( 13) 00:12:23.945 3.923 - 3.951: 99.5831% ( 2) 00:12:23.945 3.951 - 3.979: 99.5892% ( 1) 00:12:23.945 4.981 - 5.009: 99.5953% ( 1) 00:12:23.945 5.176 - 5.203: 99.6014% ( 1) 00:12:23.945 5.231 - 5.259: 99.6076% ( 1) 00:12:23.945 5.343 - 5.370: 99.6137% ( 1) 00:12:23.946 5.426 - 5.454: 99.6198% ( 1) 00:12:23.946 5.482 - 5.510: 99.6260% ( 1) 00:12:23.946 5.510 - 5.537: 99.6382% ( 2) 00:12:23.946 5.537 - 5.565: 99.6505% ( 2) 00:12:23.946 5.593 - 5.621: 99.6566% ( 1) 00:12:23.946 5.704 - 5.732: 99.6689% ( 2) 00:12:23.946 5.927 - 5.955: 99.6750% ( 1) 00:12:23.946 5.983 - 6.010: 99.6812% ( 1) 00:12:23.946 6.010 - 6.038: 99.6873% ( 1) 00:12:23.946 6.094 - 6.122: 99.6934% ( 1) 00:12:23.946 6.122 - 6.150: 99.6996% ( 1) 00:12:23.946 6.511 - 
6.539: 99.7057% ( 1) 00:12:23.946 6.650 - 6.678: 99.7118% ( 1) 00:12:23.946 6.678 - 6.706: 99.7241% ( 2) 00:12:23.946 6.873 - 6.901: 99.7302% ( 1) 00:12:23.946 6.984 - 7.012: 99.7486% ( 3) 00:12:23.946 7.040 - 7.068: 99.7609% ( 2) 00:12:23.946 7.123 - 7.179: 99.7670% ( 1) 00:12:23.946 7.179 - 7.235: 99.7731% ( 1) 00:12:23.946 7.290 - 7.346: 99.7793% ( 1) 00:12:23.946 7.513 - 7.569: 99.7915% ( 2) 00:12:23.946 7.680 - 7.736: 99.8038% ( 2) 00:12:23.946 7.791 - 7.847: 99.8099% ( 1) 00:12:23.946 7.847 - 7.903: 99.8161% ( 1) 00:12:23.946 8.181 - 8.237: 99.8222% ( 1) 00:12:23.946 8.237 - 8.292: 99.8283% ( 1) 00:12:23.946 8.459 - 8.515: 99.8344% ( 1) 00:12:23.946 8.515 - 8.570: 99.8406% ( 1) 00:12:23.946 8.570 - 8.626: 99.8467% ( 1) 00:12:23.946 8.793 - 8.849: 99.8528% ( 1) 00:12:23.946 9.016 - 9.071: 99.8590% ( 1) 00:12:23.946 13.134 - 13.190: 99.8651% ( 1) 00:12:23.946 13.802 - 13.857: 99.8712% ( 1) 00:12:23.946 1431.819 - 1438.943: 99.8774% ( 1) 00:12:23.946 [2024-07-12 19:04:26.451300] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:23.946 3989.148 - 4017.642: 100.0000% ( 20) 00:12:23.946 00:12:23.946 Complete histogram 00:12:23.946 ================== 00:12:23.946 Range in us Cumulative Count 00:12:23.946 1.753 - 1.760: 0.0061% ( 1) 00:12:23.946 1.760 - 1.767: 0.0184% ( 2) 00:12:23.946 1.767 - 1.774: 0.0858% ( 11) 00:12:23.946 1.774 - 1.781: 0.1349% ( 8) 00:12:23.946 1.781 - 1.795: 0.1410% ( 1) 00:12:23.946 1.795 - 1.809: 0.4660% ( 53) 00:12:23.946 1.809 - 1.823: 11.3496% ( 1775) 00:12:23.946 1.823 - 1.837: 24.4160% ( 2131) 00:12:23.946 1.837 - 1.850: 26.7889% ( 387) 00:12:23.946 1.850 - 1.864: 30.4065% ( 590) 00:12:23.946 1.864 - 1.878: 65.0316% ( 5647) 00:12:23.946 1.878 - 1.892: 90.5144% ( 4156) 00:12:23.946 1.892 - 1.906: 94.8924% ( 714) 00:12:23.946 1.906 - 1.920: 96.5234% ( 266) 00:12:23.946 1.920 - 1.934: 96.9771% ( 74) 00:12:23.946 1.934 - 1.948: 97.8233% ( 138) 00:12:23.946 1.948 - 1.962: 98.6633% ( 137) 00:12:23.946 1.962 - 1.976: 99.1845% ( 85) 00:12:23.946 1.976 - 1.990: 99.3071% ( 20) 00:12:23.946 1.990 - 2.003: 99.3378% ( 5) 00:12:23.946 2.017 - 2.031: 99.3439% ( 1) 00:12:23.946 2.031 - 2.045: 99.3501% ( 1) 00:12:23.946 2.184 - 2.198: 99.3562% ( 1) 00:12:23.946 3.673 - 3.701: 99.3623% ( 1) 00:12:23.946 3.757 - 3.784: 99.3684% ( 1) 00:12:23.946 3.812 - 3.840: 99.3746% ( 1) 00:12:23.946 4.007 - 4.035: 99.3807% ( 1) 00:12:23.946 4.174 - 4.202: 99.3868% ( 1) 00:12:23.946 4.647 - 4.675: 99.3930% ( 1) 00:12:23.946 4.675 - 4.703: 99.3991% ( 1) 00:12:23.946 4.730 - 4.758: 99.4052% ( 1) 00:12:23.946 4.925 - 4.953: 99.4114% ( 1) 00:12:23.946 5.148 - 5.176: 99.4175% ( 1) 00:12:23.946 5.176 - 5.203: 99.4236% ( 1) 00:12:23.946 5.203 - 5.231: 99.4298% ( 1) 00:12:23.946 5.315 - 5.343: 99.4359% ( 1) 00:12:23.946 5.370 - 5.398: 99.4420% ( 1) 00:12:23.946 5.454 - 5.482: 99.4482% ( 1) 00:12:23.946 5.510 - 5.537: 99.4543% ( 1) 00:12:23.946 5.537 - 5.565: 99.4604% ( 1) 00:12:23.946 5.704 - 5.732: 99.4666% ( 1) 00:12:23.946 5.732 - 5.760: 99.4727% ( 1) 00:12:23.946 5.843 - 5.871: 99.4788% ( 1) 00:12:23.946 5.899 - 5.927: 99.4849% ( 1) 00:12:23.946 6.150 - 6.177: 99.4911% ( 1) 00:12:23.946 6.177 - 6.205: 99.4972% ( 1) 00:12:23.946 6.205 - 6.233: 99.5033% ( 1) 00:12:23.946 6.456 - 6.483: 99.5095% ( 1) 00:12:23.946 7.179 - 7.235: 99.5156% ( 1) 00:12:23.946 7.457 - 7.513: 99.5217% ( 1) 00:12:23.946 12.299 - 12.355: 99.5279% ( 1) 00:12:23.946 13.412 - 13.468: 99.5340% ( 1) 00:12:23.946 14.080 - 14.136: 99.5401% ( 1) 00:12:23.946 38.957 - 39.179: 
99.5463% ( 1) 00:12:23.946 2165.537 - 2179.784: 99.5524% ( 1) 00:12:23.946 3989.148 - 4017.642: 100.0000% ( 73) 00:12:23.946 00:12:23.946 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:23.946 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:23.946 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:23.946 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:23.946 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:24.206 [ 00:12:24.206 { 00:12:24.206 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:24.206 "subtype": "Discovery", 00:12:24.206 "listen_addresses": [], 00:12:24.206 "allow_any_host": true, 00:12:24.206 "hosts": [] 00:12:24.206 }, 00:12:24.206 { 00:12:24.206 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:24.206 "subtype": "NVMe", 00:12:24.206 "listen_addresses": [ 00:12:24.206 { 00:12:24.206 "trtype": "VFIOUSER", 00:12:24.206 "adrfam": "IPv4", 00:12:24.206 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:24.206 "trsvcid": "0" 00:12:24.206 } 00:12:24.206 ], 00:12:24.206 "allow_any_host": true, 00:12:24.206 "hosts": [], 00:12:24.206 "serial_number": "SPDK1", 00:12:24.206 "model_number": "SPDK bdev Controller", 00:12:24.206 "max_namespaces": 32, 00:12:24.206 "min_cntlid": 1, 00:12:24.206 "max_cntlid": 65519, 00:12:24.206 "namespaces": [ 00:12:24.206 { 00:12:24.206 "nsid": 1, 00:12:24.206 "bdev_name": "Malloc1", 00:12:24.206 "name": "Malloc1", 00:12:24.206 "nguid": "AF079C778F694C409A02FBCBBB677F9A", 00:12:24.206 "uuid": "af079c77-8f69-4c40-9a02-fbcbbb677f9a" 00:12:24.206 }, 00:12:24.206 { 00:12:24.206 "nsid": 2, 00:12:24.206 "bdev_name": "Malloc3", 00:12:24.206 "name": "Malloc3", 00:12:24.206 "nguid": "D8CD77B8FC7D41ECBC1B44993E4709E2", 00:12:24.206 "uuid": "d8cd77b8-fc7d-41ec-bc1b-44993e4709e2" 00:12:24.206 } 00:12:24.206 ] 00:12:24.206 }, 00:12:24.206 { 00:12:24.206 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:24.206 "subtype": "NVMe", 00:12:24.206 "listen_addresses": [ 00:12:24.206 { 00:12:24.206 "trtype": "VFIOUSER", 00:12:24.206 "adrfam": "IPv4", 00:12:24.206 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:24.206 "trsvcid": "0" 00:12:24.206 } 00:12:24.206 ], 00:12:24.206 "allow_any_host": true, 00:12:24.206 "hosts": [], 00:12:24.206 "serial_number": "SPDK2", 00:12:24.206 "model_number": "SPDK bdev Controller", 00:12:24.206 "max_namespaces": 32, 00:12:24.206 "min_cntlid": 1, 00:12:24.206 "max_cntlid": 65519, 00:12:24.206 "namespaces": [ 00:12:24.206 { 00:12:24.206 "nsid": 1, 00:12:24.206 "bdev_name": "Malloc2", 00:12:24.206 "name": "Malloc2", 00:12:24.206 "nguid": "1E5CD51D6D1A495681CF017A2CD01BCA", 00:12:24.206 "uuid": "1e5cd51d-6d1a-4956-81cf-017a2cd01bca" 00:12:24.206 } 00:12:24.206 ] 00:12:24.206 } 00:12:24.206 ] 00:12:24.206 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:24.206 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:24.206 19:04:26 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@34 -- # aerpid=232636 00:12:24.206 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:24.206 19:04:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:24.206 19:04:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:24.206 19:04:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:12:24.206 19:04:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:12:24.206 19:04:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:12:24.206 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.466 19:04:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:24.466 19:04:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:12:24.466 19:04:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:12:24.466 19:04:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:12:24.466 [2024-07-12 19:04:26.817697] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:24.466 19:04:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:24.466 19:04:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:24.466 19:04:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:24.466 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:24.466 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:24.726 Malloc4 00:12:24.726 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:24.726 [2024-07-12 19:04:27.245861] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:24.726 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:24.726 Asynchronous Event Request test 00:12:24.726 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:24.726 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:24.726 Registering asynchronous event callbacks... 00:12:24.726 Starting namespace attribute notice tests for all controllers... 00:12:24.726 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:24.726 aer_cb - Changed Namespace 00:12:24.726 Cleaning up... 
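The AER test above synchronizes with the shell through a touch file: aer is started in the background with -t /tmp/aer_touch_file, creates that file once its asynchronous-event callback is armed, and the script polls for it before adding Malloc4, so the namespace-change notice cannot race the registration. A sketch of that handshake, reconstructed from the xtrace output (the real waitforfile helper lives in autotest_common.sh; SPDK and TRID as in the sketch further up):

    waitforfile() {
        local i=0
        while [ ! -e "$1" ]; do
            [ "$i" -lt 200 ] || return 1   # cap the wait at roughly 20 s
            i=$((i + 1))
            sleep 0.1
        done
        return 0
    }

    "$SPDK"/test/nvme/aer/aer -r "$TRID" -n 2 -g -t /tmp/aer_touch_file &
    aerpid=$!
    waitforfile /tmp/aer_touch_file    # blocks until aer reports readiness
    rm -f /tmp/aer_touch_file
    "$SPDK"/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    "$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    wait $aerpid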
00:12:24.987 [ 00:12:24.987 { 00:12:24.987 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:24.987 "subtype": "Discovery", 00:12:24.987 "listen_addresses": [], 00:12:24.987 "allow_any_host": true, 00:12:24.987 "hosts": [] 00:12:24.987 }, 00:12:24.987 { 00:12:24.987 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:24.987 "subtype": "NVMe", 00:12:24.987 "listen_addresses": [ 00:12:24.987 { 00:12:24.987 "trtype": "VFIOUSER", 00:12:24.987 "adrfam": "IPv4", 00:12:24.987 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:24.987 "trsvcid": "0" 00:12:24.987 } 00:12:24.987 ], 00:12:24.987 "allow_any_host": true, 00:12:24.987 "hosts": [], 00:12:24.987 "serial_number": "SPDK1", 00:12:24.987 "model_number": "SPDK bdev Controller", 00:12:24.987 "max_namespaces": 32, 00:12:24.987 "min_cntlid": 1, 00:12:24.987 "max_cntlid": 65519, 00:12:24.987 "namespaces": [ 00:12:24.987 { 00:12:24.987 "nsid": 1, 00:12:24.987 "bdev_name": "Malloc1", 00:12:24.987 "name": "Malloc1", 00:12:24.987 "nguid": "AF079C778F694C409A02FBCBBB677F9A", 00:12:24.987 "uuid": "af079c77-8f69-4c40-9a02-fbcbbb677f9a" 00:12:24.987 }, 00:12:24.987 { 00:12:24.987 "nsid": 2, 00:12:24.987 "bdev_name": "Malloc3", 00:12:24.987 "name": "Malloc3", 00:12:24.987 "nguid": "D8CD77B8FC7D41ECBC1B44993E4709E2", 00:12:24.987 "uuid": "d8cd77b8-fc7d-41ec-bc1b-44993e4709e2" 00:12:24.987 } 00:12:24.987 ] 00:12:24.987 }, 00:12:24.987 { 00:12:24.987 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:24.987 "subtype": "NVMe", 00:12:24.987 "listen_addresses": [ 00:12:24.987 { 00:12:24.987 "trtype": "VFIOUSER", 00:12:24.987 "adrfam": "IPv4", 00:12:24.987 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:24.987 "trsvcid": "0" 00:12:24.987 } 00:12:24.987 ], 00:12:24.987 "allow_any_host": true, 00:12:24.987 "hosts": [], 00:12:24.987 "serial_number": "SPDK2", 00:12:24.987 "model_number": "SPDK bdev Controller", 00:12:24.987 "max_namespaces": 32, 00:12:24.987 "min_cntlid": 1, 00:12:24.987 "max_cntlid": 65519, 00:12:24.987 "namespaces": [ 00:12:24.987 { 00:12:24.987 "nsid": 1, 00:12:24.987 "bdev_name": "Malloc2", 00:12:24.987 "name": "Malloc2", 00:12:24.987 "nguid": "1E5CD51D6D1A495681CF017A2CD01BCA", 00:12:24.987 "uuid": "1e5cd51d-6d1a-4956-81cf-017a2cd01bca" 00:12:24.987 }, 00:12:24.987 { 00:12:24.987 "nsid": 2, 00:12:24.987 "bdev_name": "Malloc4", 00:12:24.987 "name": "Malloc4", 00:12:24.987 "nguid": "4B5ED7D85CC04F5BA6D99EDB89309BC3", 00:12:24.987 "uuid": "4b5ed7d8-5cc0-4f5b-a6d9-9edb89309bc3" 00:12:24.987 } 00:12:24.987 ] 00:12:24.987 } 00:12:24.987 ] 00:12:24.987 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 232636 00:12:24.987 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:24.987 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 224787 00:12:24.987 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 224787 ']' 00:12:24.987 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 224787 00:12:24.987 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:24.987 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:24.987 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 224787 00:12:24.987 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:24.987 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:12:24.987 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 224787' 00:12:24.987 killing process with pid 224787 00:12:24.987 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 224787 00:12:24.987 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 224787 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=232864 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 232864' 00:12:25.246 Process pid: 232864 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 232864 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 232864 ']' 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:25.246 19:04:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:25.246 [2024-07-12 19:04:27.811634] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:25.246 [2024-07-12 19:04:27.812483] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:12:25.246 [2024-07-12 19:04:27.812521] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.506 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.506 [2024-07-12 19:04:27.880698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.506 [2024-07-12 19:04:27.952845] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.506 [2024-07-12 19:04:27.952885] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:25.506 [2024-07-12 19:04:27.952892] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.506 [2024-07-12 19:04:27.952898] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.506 [2024-07-12 19:04:27.952903] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.506 [2024-07-12 19:04:27.953010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.506 [2024-07-12 19:04:27.953119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.506 [2024-07-12 19:04:27.953230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.506 [2024-07-12 19:04:27.953244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.506 [2024-07-12 19:04:28.038544] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:25.506 [2024-07-12 19:04:28.038893] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:25.506 [2024-07-12 19:04:28.039008] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:25.506 [2024-07-12 19:04:28.039017] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:25.506 [2024-07-12 19:04:28.039401] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:26.075 19:04:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.075 19:04:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:26.075 19:04:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:27.457 19:04:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:27.457 19:04:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:27.457 19:04:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:27.457 19:04:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:27.457 19:04:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:27.457 19:04:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:27.457 Malloc1 00:12:27.457 19:04:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:27.716 19:04:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:27.975 19:04:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:27.975 19:04:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:12:27.975 19:04:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:27.975 19:04:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:28.235 Malloc2 00:12:28.235 19:04:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:28.495 19:04:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:28.756 19:04:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:28.756 19:04:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:28.756 19:04:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 232864 00:12:28.756 19:04:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 232864 ']' 00:12:28.756 19:04:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 232864 00:12:28.756 19:04:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:28.756 19:04:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:28.756 19:04:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 232864 00:12:28.756 19:04:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:28.756 19:04:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:28.756 19:04:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 232864' 00:12:28.756 killing process with pid 232864 00:12:28.756 19:04:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 232864 00:12:28.756 19:04:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 232864 00:12:29.016 19:04:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:29.016 19:04:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:29.016 00:12:29.016 real 0m52.424s 00:12:29.016 user 3m27.372s 00:12:29.016 sys 0m3.611s 00:12:29.016 19:04:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.016 19:04:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:29.016 ************************************ 00:12:29.016 END TEST nvmf_vfio_user 00:12:29.016 ************************************ 00:12:29.016 19:04:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:29.016 19:04:31 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:29.016 19:04:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:29.016 19:04:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.016 19:04:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:29.016 ************************************ 00:12:29.016 START TEST 
nvmf_vfio_user_nvme_compliance 00:12:29.016 ************************************ 00:12:29.016 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:29.276 * Looking for test storage... 00:12:29.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.276 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=233478 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 233478' 00:12:29.277 Process pid: 233478 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 233478 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 233478 ']' 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.277 19:04:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:29.277 [2024-07-12 19:04:31.746583] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:12:29.277 [2024-07-12 19:04:31.746631] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.277 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.277 [2024-07-12 19:04:31.815522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:29.536 [2024-07-12 19:04:31.894751] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.536 [2024-07-12 19:04:31.894786] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.536 [2024-07-12 19:04:31.894797] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.536 [2024-07-12 19:04:31.894802] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.536 [2024-07-12 19:04:31.894807] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
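The target launch recorded above, condensed into a standalone sketch (a simplification of the harness's waitforlisten helper, which polls the same /var/tmp/spdk.sock; the rpc_get_methods probe here is an assumption, any cheap RPC would do):

  # Start the NVMe-oF target on cores 0-2 (-m 0x7) with all tracepoint
  # groups enabled (-e 0xFFFF), then poll its RPC socket until it answers.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
  nvmfpid=$!
  trap 'kill -9 $nvmfpid; exit 1' SIGINT SIGTERM EXIT
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1   # target has not bound the UNIX-domain socket yet
  done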
00:12:29.536 [2024-07-12 19:04:31.894866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.536 [2024-07-12 19:04:31.894974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.536 [2024-07-12 19:04:31.894975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.104 19:04:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.104 19:04:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:30.104 19:04:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:31.040 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:31.040 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:31.040 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:31.040 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.040 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:31.040 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.040 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:31.040 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:31.040 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.040 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:31.299 malloc0 00:12:31.299 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.299 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:31.299 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.299 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:31.299 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.299 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:31.299 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.299 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:31.299 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.299 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:31.299 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.299 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:31.299 19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.299 
19:04:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:31.299 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.299 00:12:31.299 00:12:31.299 CUnit - A unit testing framework for C - Version 2.1-3 00:12:31.299 http://cunit.sourceforge.net/ 00:12:31.299 00:12:31.299 00:12:31.299 Suite: nvme_compliance 00:12:31.299 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-12 19:04:33.787726] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.299 [2024-07-12 19:04:33.789071] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:31.299 [2024-07-12 19:04:33.789085] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:31.299 [2024-07-12 19:04:33.789091] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:31.299 [2024-07-12 19:04:33.790744] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.299 passed 00:12:31.558 Test: admin_identify_ctrlr_verify_fused ...[2024-07-12 19:04:33.867309] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.558 [2024-07-12 19:04:33.870328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.558 passed 00:12:31.558 Test: admin_identify_ns ...[2024-07-12 19:04:33.950643] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.558 [2024-07-12 19:04:34.010234] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:31.558 [2024-07-12 19:04:34.018234] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:31.558 [2024-07-12 19:04:34.042341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.558 passed 00:12:31.558 Test: admin_get_features_mandatory_features ...[2024-07-12 19:04:34.115486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.558 [2024-07-12 19:04:34.118507] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.816 passed 00:12:31.816 Test: admin_get_features_optional_features ...[2024-07-12 19:04:34.200016] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.816 [2024-07-12 19:04:34.203035] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.816 passed 00:12:31.816 Test: admin_set_features_number_of_queues ...[2024-07-12 19:04:34.281756] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.075 [2024-07-12 19:04:34.387315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.075 passed 00:12:32.075 Test: admin_get_log_page_mandatory_logs ...[2024-07-12 19:04:34.459614] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.075 [2024-07-12 19:04:34.462634] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.075 passed 00:12:32.075 Test: admin_get_log_page_with_lpo ...[2024-07-12 19:04:34.541603] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.075 [2024-07-12 19:04:34.610235] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:32.075 [2024-07-12 19:04:34.623310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.334 passed 00:12:32.334 Test: fabric_property_get ...[2024-07-12 19:04:34.695631] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.334 [2024-07-12 19:04:34.696871] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:32.334 [2024-07-12 19:04:34.698654] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.334 passed 00:12:32.334 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-12 19:04:34.779169] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.334 [2024-07-12 19:04:34.780406] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:32.334 [2024-07-12 19:04:34.782183] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.334 passed 00:12:32.334 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-12 19:04:34.859723] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.592 [2024-07-12 19:04:34.943238] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:32.592 [2024-07-12 19:04:34.959235] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:32.592 [2024-07-12 19:04:34.964318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.592 passed 00:12:32.592 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-12 19:04:35.042257] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.592 [2024-07-12 19:04:35.043511] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:32.592 [2024-07-12 19:04:35.045279] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.592 passed 00:12:32.592 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-12 19:04:35.124192] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.851 [2024-07-12 19:04:35.201236] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:32.851 [2024-07-12 19:04:35.225230] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:32.851 [2024-07-12 19:04:35.230316] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.851 passed 00:12:32.851 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-12 19:04:35.305470] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.851 [2024-07-12 19:04:35.306706] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:32.851 [2024-07-12 19:04:35.306731] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:32.851 [2024-07-12 19:04:35.308501] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.851 passed 00:12:32.851 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-12 19:04:35.386432] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:33.111 [2024-07-12 19:04:35.474240] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:12:33.111 [2024-07-12 19:04:35.482231] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:33.111 [2024-07-12 19:04:35.490234] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:33.111 [2024-07-12 19:04:35.498234] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:33.111 [2024-07-12 19:04:35.527305] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:33.111 passed 00:12:33.111 Test: admin_create_io_sq_verify_pc ...[2024-07-12 19:04:35.605167] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:33.111 [2024-07-12 19:04:35.620241] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:33.111 [2024-07-12 19:04:35.636329] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:33.111 passed 00:12:33.370 Test: admin_create_io_qp_max_qps ...[2024-07-12 19:04:35.714886] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:34.307 [2024-07-12 19:04:36.817235] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:34.874 [2024-07-12 19:04:37.201365] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:34.874 passed 00:12:34.874 Test: admin_create_io_sq_shared_cq ...[2024-07-12 19:04:37.277380] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:34.874 [2024-07-12 19:04:37.411233] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:35.133 [2024-07-12 19:04:37.448297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:35.133 passed 00:12:35.133 00:12:35.133 Run Summary: Type Total Ran Passed Failed Inactive 00:12:35.133 suites 1 1 n/a 0 0 00:12:35.133 tests 18 18 18 0 0 00:12:35.133 asserts 360 360 360 0 n/a 00:12:35.133 00:12:35.133 Elapsed time = 1.508 seconds 00:12:35.133 19:04:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 233478 00:12:35.133 19:04:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 233478 ']' 00:12:35.133 19:04:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 233478 00:12:35.133 19:04:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:12:35.133 19:04:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:35.133 19:04:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 233478 00:12:35.133 19:04:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:35.133 19:04:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:35.133 19:04:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 233478' 00:12:35.133 killing process with pid 233478 00:12:35.133 19:04:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 233478 00:12:35.133 19:04:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 233478 00:12:35.392 19:04:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:35.392 19:04:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:35.392 00:12:35.392 real 0m6.159s 00:12:35.392 user 0m17.584s 00:12:35.392 sys 0m0.448s 00:12:35.392 19:04:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.392 19:04:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:35.392 ************************************ 00:12:35.392 END TEST nvmf_vfio_user_nvme_compliance 00:12:35.392 ************************************ 00:12:35.392 19:04:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:35.392 19:04:37 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:35.392 19:04:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:35.392 19:04:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.392 19:04:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:35.392 ************************************ 00:12:35.392 START TEST nvmf_vfio_user_fuzz 00:12:35.392 ************************************ 00:12:35.392 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:35.392 * Looking for test storage... 00:12:35.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.392 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.392 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:35.392 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.392 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.393 19:04:37 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2-@6 -- # export PATH [same PATH exports as in the compliance section above; duplicate dumps trimmed] 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz --
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=234629 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 234629' 00:12:35.393 Process pid: 234629 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 234629 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 234629 ']' 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
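The xtrace records that follow replay, over JSON-RPC, the same vfio-user bring-up the compliance test used above (minus the -m 32 option used there); consolidated into one sketch, assuming rpc_cmd is the harness wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Create the vfio-user transport, back it with a 64 MiB malloc bdev
  # (512-byte blocks), and expose it as cnode0 under /var/run/vfio-user.
  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  $rpc bdev_malloc_create 64 512 -b malloc0
  $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0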
00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.393 19:04:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:36.331 19:04:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.331 19:04:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:36.331 19:04:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:37.267 malloc0 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.267 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:37.526 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.526 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:37.526 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.526 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:37.526 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.526 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:37.526 19:04:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:09.627 Fuzzing completed. 
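The nvme_fuzz invocation echoed above, broken out flag by flag (all values taken from the log):

  fuzz=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
  # -m 0x2: run the fuzzer on core 1, disjoint from the target's -m 0x1;
  # -t 30: seconds of fuzzing; -S 123456: fixed seed, so the random command
  # stream is reproducible; -F: the vfio-user transport ID to attack;
  # -N and -a exactly as recorded in the invocation above.
  $fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
      -N -a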
Shutting down the fuzz application 00:13:09.627 00:13:09.627 Dumping successful admin opcodes: 00:13:09.627 8, 9, 10, 24, 00:13:09.627 Dumping successful io opcodes: 00:13:09.627 0, 00:13:09.627 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1099703, total successful commands: 4330, random_seed: 771086848 00:13:09.627 NS: 0x200003a1ef00 admin qp, Total commands completed: 270399, total successful commands: 2178, random_seed: 559992896 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 234629 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 234629 ']' 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 234629 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 234629 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 234629' 00:13:09.627 killing process with pid 234629 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 234629 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 234629 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:09.627 00:13:09.627 real 0m32.793s 00:13:09.627 user 0m35.798s 00:13:09.627 sys 0m25.872s 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:09.627 19:05:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:09.627 ************************************ 00:13:09.627 END TEST nvmf_vfio_user_fuzz 00:13:09.627 ************************************ 00:13:09.627 19:05:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:09.627 19:05:10 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:09.627 19:05:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:09.627 19:05:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.627 19:05:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:09.627 ************************************ 00:13:09.627 START 
TEST nvmf_host_management 00:13:09.627 ************************************ 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:09.627 * Looking for test storage... 00:13:09.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.627 19:05:10 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3-@6 -- # export PATH [same PATH re-exports as above; duplicate dumps trimmed] 00:13:09.627 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:09.628 19:05:10
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:09.628 19:05:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:13.827 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:13.827 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:13.827 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:13.828 Found net devices under 0000:86:00.0: cvl_0_0 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:13.828 Found net devices under 0000:86:00.1: cvl_0_1 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:13.828 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:14.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:14.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:13:14.088 00:13:14.088 --- 10.0.0.2 ping statistics --- 00:13:14.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.088 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:14.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:13:14.088 00:13:14.088 --- 10.0.0.1 ping statistics --- 00:13:14.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.088 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:14.088 19:05:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:14.089 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:14.089 19:05:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:14.089 19:05:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:14.089 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=243054 00:13:14.089 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 243054 00:13:14.089 19:05:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:14.089 19:05:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 243054 ']' 00:13:14.089 19:05:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.089 19:05:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.089 19:05:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:14.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.089 19:05:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.089 19:05:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:14.089 [2024-07-12 19:05:16.582875] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:13:14.089 [2024-07-12 19:05:16.582921] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.089 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.089 [2024-07-12 19:05:16.655560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.348 [2024-07-12 19:05:16.736099] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.348 [2024-07-12 19:05:16.736134] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.348 [2024-07-12 19:05:16.736141] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.348 [2024-07-12 19:05:16.736147] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.348 [2024-07-12 19:05:16.736152] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.348 [2024-07-12 19:05:16.736269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.348 [2024-07-12 19:05:16.736309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.348 [2024-07-12 19:05:16.736416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.348 [2024-07-12 19:05:16.736416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:14.918 [2024-07-12 19:05:17.434157] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:14.918 19:05:17 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.918 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:14.918 Malloc0 00:13:15.178 [2024-07-12 19:05:17.493712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=243214 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 243214 /var/tmp/bdevperf.sock 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 243214 ']' 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:15.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
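The rpcs.txt batch cat'ed at host_management.sh@23 above is what produces the Malloc0 bdev and the 10.0.0.2:4420 listener reported just above. A minimal hand-driven sketch of the same target-side plumbing follows; the malloc geometry and serial number are illustrative assumptions, not values taken from this run:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                    # traced above at host_management.sh@18
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                       # size/block assumed
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0  # serial assumed; no -a, so the host allow list stays enforced
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0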
00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:15.178 { 00:13:15.178 "params": { 00:13:15.178 "name": "Nvme$subsystem", 00:13:15.178 "trtype": "$TEST_TRANSPORT", 00:13:15.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:15.178 "adrfam": "ipv4", 00:13:15.178 "trsvcid": "$NVMF_PORT", 00:13:15.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:15.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:15.178 "hdgst": ${hdgst:-false}, 00:13:15.178 "ddgst": ${ddgst:-false} 00:13:15.178 }, 00:13:15.178 "method": "bdev_nvme_attach_controller" 00:13:15.178 } 00:13:15.178 EOF 00:13:15.178 )") 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:15.178 19:05:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:15.178 "params": { 00:13:15.178 "name": "Nvme0", 00:13:15.178 "trtype": "tcp", 00:13:15.178 "traddr": "10.0.0.2", 00:13:15.178 "adrfam": "ipv4", 00:13:15.178 "trsvcid": "4420", 00:13:15.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:15.178 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:15.178 "hdgst": false, 00:13:15.178 "ddgst": false 00:13:15.178 }, 00:13:15.178 "method": "bdev_nvme_attach_controller" 00:13:15.178 }' 00:13:15.178 [2024-07-12 19:05:17.583942] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:13:15.178 [2024-07-12 19:05:17.583988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243214 ] 00:13:15.178 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.178 [2024-07-12 19:05:17.649520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.178 [2024-07-12 19:05:17.722627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.438 Running I/O for 10 seconds... 
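The JSON printed above amounts to a single controller attach against that subsystem. Issued by hand at the bdevperf RPC socket, a rough equivalent would be (sketch):

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0
    # the waitforio loop below polls the read counter of the resulting Nvme0n1 bdev:
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'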
00:13:16.010 19:05:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.010 19:05:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:16.010 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:16.010 19:05:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.011 19:05:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.011 [2024-07-12 19:05:18.465290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.011 [2024-07-12 19:05:18.465333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.465343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 
nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.011 [2024-07-12 19:05:18.465350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.465357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.011 [2024-07-12 19:05:18.465363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.465370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.011 [2024-07-12 19:05:18.465377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.465383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c7980 is same with the state(5) to be set 00:13:16.011 [2024-07-12 19:05:18.466080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:13:16.011 [2024-07-12 19:05:18.466201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 
[2024-07-12 19:05:18.466376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 
19:05:18.466540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.011 [2024-07-12 19:05:18.466547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.011 [2024-07-12 19:05:18.466555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 
19:05:18.466701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 
19:05:18.466868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.466986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.466994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.467002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.467010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.467018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 
19:05:18.467026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.467033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.467042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.467050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.467059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.467066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.467074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.467082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.467090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.467097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.467106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.467113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.467122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:16.012 [2024-07-12 19:05:18.467129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.012 [2024-07-12 19:05:18.467137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d8b20 is same with the state(5) to be set 00:13:16.012 [2024-07-12 19:05:18.467188] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26d8b20 was disconnected and freed. reset controller. 
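The burst of ABORTED - SQ DELETION completions above is the intended effect of host_management.sh@84: the host NQN is pulled from the subsystem's allow list while bdevperf still has a queue depth of 64 in flight, the target tears the queue pair down, every outstanding command completes as aborted, and bdev_nvme frees the qpair and schedules the controller reset seen next. The trigger is the single RPC traced earlier:

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0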
00:13:16.012 [2024-07-12 19:05:18.468123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:13:16.012 task offset: 122752 on job bdev=Nvme0n1 fails
00:13:16.012
00:13:16.012 Latency(us)
00:13:16.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:16.012 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:16.012 Job: Nvme0n1 ended in about 0.47 seconds with error
00:13:16.012 Verification LBA range: start 0x0 length 0x400
00:13:16.012 Nvme0n1 : 0.47 1909.21 119.33 136.37 0.00 30517.14 4587.52 27468.13
00:13:16.012 ===================================================================================================================
00:13:16.012 Total : 1909.21 119.33 136.37 0.00 30517.14 4587.52 27468.13
00:13:16.012 [2024-07-12 19:05:18.469706] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:16.012 [2024-07-12 19:05:18.469720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c7980 (9): Bad file descriptor
00:13:16.013 19:05:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:16.013 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:13:16.013 19:05:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:16.013 19:05:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:16.013 [2024-07-12 19:05:18.476614] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:13:16.013 [2024-07-12 19:05:18.476703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:13:16.013 [2024-07-12 19:05:18.476726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:16.013 [2024-07-12 19:05:18.476742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:13:16.013 [2024-07-12 19:05:18.476749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:13:16.013 [2024-07-12 19:05:18.476756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:13:16.013 [2024-07-12 19:05:18.476762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22c7980
00:13:16.013 [2024-07-12 19:05:18.476780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c7980 (9): Bad file descriptor
00:13:16.013 [2024-07-12 19:05:18.476790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:13:16.013 [2024-07-12 19:05:18.476797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:13:16.013 [2024-07-12 19:05:18.476805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:13:16.013 [2024-07-12 19:05:18.476816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
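The reconnect attempted by that reset then fails exactly where it should: 'sct 1, sc 132' on the FABRIC CONNECT completion is command-specific status 0x84, the NVMe-oF invalid-host connect status, matching the target-side 'does not allow host' error; the in-flight reconnect loses the race with the @85 add_host that restores access. The allow list can be inspected at any point with a query along these lines (sketch; the jq filter is an assumption):

    scripts/rpc.py nvmf_get_subsystems | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0") | .hosts'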
00:13:16.013 19:05:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.013 19:05:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:16.952 19:05:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 243214 00:13:16.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (243214) - No such process 00:13:16.952 19:05:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:16.952 19:05:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:16.952 19:05:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:16.952 19:05:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:16.952 19:05:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:16.952 19:05:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:16.952 19:05:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:16.952 19:05:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:16.952 { 00:13:16.952 "params": { 00:13:16.952 "name": "Nvme$subsystem", 00:13:16.952 "trtype": "$TEST_TRANSPORT", 00:13:16.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:16.952 "adrfam": "ipv4", 00:13:16.952 "trsvcid": "$NVMF_PORT", 00:13:16.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:16.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:16.952 "hdgst": ${hdgst:-false}, 00:13:16.952 "ddgst": ${ddgst:-false} 00:13:16.952 }, 00:13:16.952 "method": "bdev_nvme_attach_controller" 00:13:16.952 } 00:13:16.952 EOF 00:13:16.952 )") 00:13:16.952 19:05:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:16.952 19:05:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:16.952 19:05:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:16.952 19:05:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:16.952 "params": { 00:13:16.952 "name": "Nvme0", 00:13:16.952 "trtype": "tcp", 00:13:16.952 "traddr": "10.0.0.2", 00:13:16.952 "adrfam": "ipv4", 00:13:16.952 "trsvcid": "4420", 00:13:16.952 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:16.952 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:16.952 "hdgst": false, 00:13:16.952 "ddgst": false 00:13:16.952 }, 00:13:16.952 "method": "bdev_nvme_attach_controller" 00:13:16.952 }' 00:13:17.212 [2024-07-12 19:05:19.537150] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:13:17.212 [2024-07-12 19:05:19.537197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243669 ] 00:13:17.212 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.212 [2024-07-12 19:05:19.605783] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.212 [2024-07-12 19:05:19.679267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.471 Running I/O for 1 seconds... 
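Because the first bdevperf instance already spdk_app_stop'd itself after the failed reset, the kill -9 at host_management.sh@91 predictably finds no process and the script shrugs it off; the per-core lock files are then cleared before this second, 1-second verify run, which is plausibly why the target later reports 'Failed to unlink lock fd for core 1, errno: 2' (ENOENT) at shutdown. The pattern, roughly:

    kill -9 "$perfpid" 2>/dev/null || true    # sketch; the process may already be gone
    rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004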
00:13:18.410
00:13:18.410 Latency(us)
00:13:18.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:18.410 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:18.410 Verification LBA range: start 0x0 length 0x400
00:13:18.410 Nvme0n1 : 1.01 2048.54 128.03 0.00 0.00 30642.49 1773.75 27354.16
00:13:18.410 ===================================================================================================================
00:13:18.410 Total : 2048.54 128.03 0.00 0.00 30642.49 1773.75 27354.16
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:18.670 rmmod nvme_tcp
00:13:18.670 rmmod nvme_fabrics
00:13:18.670 rmmod nvme_keyring
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 243054 ']'
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 243054
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 243054 ']'
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 243054
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 243054
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 243054'
00:13:18.670 killing process with pid 243054
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 243054
00:13:18.670 19:05:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 243054
00:13:18.930 [2024-07-12 19:05:21.373659] app.c:
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:18.930 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:18.930 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:18.930 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:18.930 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:18.930 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:18.930 19:05:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.930 19:05:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.930 19:05:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.472 19:05:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:21.472 19:05:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:21.472 00:13:21.472 real 0m12.811s 00:13:21.472 user 0m22.726s 00:13:21.472 sys 0m5.413s 00:13:21.472 19:05:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:21.472 19:05:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:21.472 ************************************ 00:13:21.472 END TEST nvmf_host_management 00:13:21.472 ************************************ 00:13:21.472 19:05:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:21.472 19:05:23 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:21.472 19:05:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:21.472 19:05:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:21.472 19:05:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:21.472 ************************************ 00:13:21.472 START TEST nvmf_lvol 00:13:21.472 ************************************ 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:21.472 * Looking for test storage... 
00:13:21.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.472 19:05:23 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:21.472 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:21.473 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.473 19:05:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.473 19:05:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.473 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:21.473 19:05:23 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:21.473 19:05:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:21.473 19:05:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:26.764 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.764 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:26.765 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:26.765 Found net devices under 0000:86:00.0: cvl_0_0 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:26.765 Found net devices under 0000:86:00.1: cvl_0_1 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:26.765 
19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:26.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:13:26.765 00:13:26.765 --- 10.0.0.2 ping statistics --- 00:13:26.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.765 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:13:26.765 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:27.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:13:27.025 00:13:27.025 --- 10.0.0.1 ping statistics --- 00:13:27.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.025 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=247397 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 247397 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 247397 ']' 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:27.025 19:05:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:27.025 [2024-07-12 19:05:29.430416] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:13:27.025 [2024-07-12 19:05:29.430462] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.025 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.025 [2024-07-12 19:05:29.502664] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:27.025 [2024-07-12 19:05:29.581978] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.025 [2024-07-12 19:05:29.582011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
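The whole test topology is built from those two ports by the nvmf_tcp_init sequence a few records back: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the cross-namespace pings above prove the path. A minimal reproduction of that plumbing, with the interface names and addresses exactly as logged:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Admit NVMe/TCP (port 4420) through the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1           # namespace -> root ns

The namespace split forces traffic between 10.0.0.1 and 10.0.0.2 onto the link between the two physical ports instead of letting the kernel short-circuit it through the local stack.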
00:13:27.025 [2024-07-12 19:05:29.582018] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.025 [2024-07-12 19:05:29.582023] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.025 [2024-07-12 19:05:29.582028] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.025 [2024-07-12 19:05:29.582101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.025 [2024-07-12 19:05:29.582228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.025 [2024-07-12 19:05:29.582241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.963 19:05:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.963 19:05:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:27.963 19:05:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:27.963 19:05:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:27.963 19:05:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:27.963 19:05:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.963 19:05:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:27.963 [2024-07-12 19:05:30.427458] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.963 19:05:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:28.222 19:05:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:28.222 19:05:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:28.481 19:05:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:28.481 19:05:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:28.739 19:05:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:28.739 19:05:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c5bcd7c7-79e3-4774-bd07-e3272f20ce82 00:13:28.739 19:05:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c5bcd7c7-79e3-4774-bd07-e3272f20ce82 lvol 20 00:13:28.998 19:05:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7500bf85-20f0-4ea8-b0ec-3e95977548b8 00:13:28.998 19:05:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:29.257 19:05:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7500bf85-20f0-4ea8-b0ec-3e95977548b8 00:13:29.257 19:05:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
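Condensed, the RPC sequence nvmf_lvol.sh has issued so far stacks a logical volume on a RAID-0 of two malloc bdevs and exports it over NVMe/TCP. The $rpc/$lvs/$lvol variables below stand in for the full script path and the UUIDs the log captures (c5bcd7c7-... and 7500bf85-...):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                    # Malloc0
$rpc bdev_malloc_create 64 512                    # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB volume, UUID out
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

The listener notice that follows confirms the target is accepting NVMe/TCP on 10.0.0.2:4420, which is the address spdk_nvme_perf connects to next.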
00:13:29.514 [2024-07-12 19:05:31.973943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.514 19:05:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:29.773 19:05:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=247924 00:13:29.773 19:05:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:29.773 19:05:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:29.773 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.708 19:05:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7500bf85-20f0-4ea8-b0ec-3e95977548b8 MY_SNAPSHOT 00:13:30.966 19:05:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=149827e1-ec4d-40f8-9748-268fc38b4cc5 00:13:30.966 19:05:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7500bf85-20f0-4ea8-b0ec-3e95977548b8 30 00:13:31.224 19:05:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 149827e1-ec4d-40f8-9748-268fc38b4cc5 MY_CLONE 00:13:31.483 19:05:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3a64eca9-28f8-45e1-8d7c-c383004fce2c 00:13:31.483 19:05:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3a64eca9-28f8-45e1-8d7c-c383004fce2c 00:13:32.049 19:05:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 247924 00:13:40.172 Initializing NVMe Controllers 00:13:40.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:40.172 Controller IO queue size 128, less than required. 00:13:40.172 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:40.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:40.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:40.172 Initialization complete. Launching workers. 
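With perf_pid=247924 driving ten seconds of 4 KiB random writes at the namespace, the test mutates the volume underneath the live workload; snapshot, resize, clone, and inflate all have to succeed mid-I/O. Stripped of the UUID bookkeeping (MY_SNAPSHOT/MY_CLONE names as logged, $snapshot/$clone standing in for the captured UUIDs):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
"$bin"/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # freeze current data
$rpc bdev_lvol_resize "$lvol" 30                         # grow the live lvol to 30 MiB
clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)       # writable clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                          # copy clusters into the clone
wait "$perf_pid"                                         # collect the report below

inflate materializes every cluster the clone was sharing with its snapshot, so MY_CLONE no longer depends on MY_SNAPSHOT once it returns.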
00:13:40.172 ======================================================== 00:13:40.172 Latency(us) 00:13:40.172 Device Information : IOPS MiB/s Average min max 00:13:40.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12480.80 48.75 10257.60 2117.70 53787.29 00:13:40.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12593.30 49.19 10163.92 3844.73 56342.72 00:13:40.172 ======================================================== 00:13:40.172 Total : 25074.10 97.95 10210.55 2117.70 56342.72 00:13:40.172 00:13:40.172 19:05:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:40.172 19:05:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7500bf85-20f0-4ea8-b0ec-3e95977548b8 00:13:40.431 19:05:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c5bcd7c7-79e3-4774-bd07-e3272f20ce82 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:40.689 rmmod nvme_tcp 00:13:40.689 rmmod nvme_fabrics 00:13:40.689 rmmod nvme_keyring 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 247397 ']' 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 247397 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 247397 ']' 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 247397 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 247397 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 247397' 00:13:40.689 killing process with pid 247397 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 247397 00:13:40.689 19:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 247397 00:13:40.948 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:40.948 19:05:43 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:40.948 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:40.948 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:40.948 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:40.948 19:05:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.948 19:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.948 19:05:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:43.488 00:13:43.488 real 0m21.947s 00:13:43.488 user 1m4.186s 00:13:43.488 sys 0m7.079s 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:43.488 ************************************ 00:13:43.488 END TEST nvmf_lvol 00:13:43.488 ************************************ 00:13:43.488 19:05:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:43.488 19:05:45 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:43.488 19:05:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:43.488 19:05:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.488 19:05:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:43.488 ************************************ 00:13:43.488 START TEST nvmf_lvs_grow 00:13:43.488 ************************************ 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:43.488 * Looking for test storage... 
00:13:43.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:43.488 19:05:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:48.770 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:48.771 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:48.771 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:48.771 Found net devices under 0000:86:00.0: cvl_0_0 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:48.771 Found net devices under 0000:86:00.1: cvl_0_1 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:48.771 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:49.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:13:49.031 00:13:49.031 --- 10.0.0.2 ping statistics --- 00:13:49.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.031 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:49.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:49.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:13:49.031 00:13:49.031 --- 10.0.0.1 ping statistics --- 00:13:49.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.031 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=253078 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 253078 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 253078 ']' 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:49.031 19:05:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:49.031 [2024-07-12 19:05:51.493904] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:13:49.031 [2024-07-12 19:05:51.493945] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.031 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.031 [2024-07-12 19:05:51.562722] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.291 [2024-07-12 19:05:51.640826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.291 [2024-07-12 19:05:51.640859] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
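nvmfappstart then launches the target inside the namespace (single core this time, -m 0x1) and waitforlisten blocks until the app answers on its UNIX-domain RPC socket. A simplified equivalent; the polling loop here is illustrative, not the harness's exact retry logic:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
ip netns exec cvl_0_0_ns_spdk "$tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# Poll /var/tmp/spdk.sock until the RPC server is up (or the app died).
for _ in $(seq 1 100); do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done

The RPC socket lives on the shared filesystem rather than in the network namespace, which is why rpc.py in the root namespace can keep configuring a target whose NICs are isolated.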
00:13:49.291 [2024-07-12 19:05:51.640867] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.291 [2024-07-12 19:05:51.640873] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.291 [2024-07-12 19:05:51.640878] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.291 [2024-07-12 19:05:51.640894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.861 19:05:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:49.861 19:05:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:49.861 19:05:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:49.861 19:05:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:49.861 19:05:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:49.861 19:05:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.861 19:05:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:50.121 [2024-07-12 19:05:52.481388] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.121 19:05:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:50.121 19:05:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:50.121 19:05:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.121 19:05:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:50.121 ************************************ 00:13:50.121 START TEST lvs_grow_clean 00:13:50.121 ************************************ 00:13:50.121 19:05:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:50.121 19:05:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:50.121 19:05:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:50.121 19:05:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:50.121 19:05:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:50.121 19:05:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:50.121 19:05:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:50.121 19:05:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:50.121 19:05:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:50.121 19:05:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:50.380 19:05:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:50.380 19:05:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:50.380 19:05:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0703c0ac-cad9-4b6a-bd65-fa36b791ff77 00:13:50.380 19:05:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0703c0ac-cad9-4b6a-bd65-fa36b791ff77 00:13:50.640 19:05:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:50.640 19:05:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:50.640 19:05:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:50.640 19:05:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0703c0ac-cad9-4b6a-bd65-fa36b791ff77 lvol 150 00:13:50.899 19:05:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=884ccb35-479e-4a39-b4c9-6178a7da68a8 00:13:50.899 19:05:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:50.899 19:05:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:50.899 [2024-07-12 19:05:53.447968] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:50.899 [2024-07-12 19:05:53.448016] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:50.899 true 00:13:50.899 19:05:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0703c0ac-cad9-4b6a-bd65-fa36b791ff77 00:13:50.899 19:05:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:51.158 19:05:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:51.158 19:05:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:51.417 19:05:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 884ccb35-479e-4a39-b4c9-6178a7da68a8 00:13:51.417 19:05:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:51.676 [2024-07-12 19:05:54.121966] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.676 19:05:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:51.936 19:05:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=253585 00:13:51.936 19:05:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:51.936 19:05:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:51.936 19:05:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 253585 /var/tmp/bdevperf.sock 00:13:51.936 19:05:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 253585 ']' 00:13:51.936 19:05:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:51.936 19:05:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.936 19:05:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:51.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:51.936 19:05:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.936 19:05:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:51.936 [2024-07-12 19:05:54.334779] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
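For lvs_grow_clean the backing store is just a file: a 200 MiB truncated file wrapped in an AIO bdev, an lvstore with 4 MiB clusters on top (49 data clusters once metadata is taken), and a 150 MiB lvol carved from it. The file is then grown to 400 MiB and rescanned, but total_data_clusters stays at 49 until bdev_lvol_grow_lvstore runs under I/O further down, where it jumps to 99. Condensed from the records above, paths as logged:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
rm -f "$aio"; truncate -s 200M "$aio"
$rpc bdev_aio_create "$aio" aio_bdev 4096            # 4 KiB logical blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)     # 150 MiB lvol
truncate -s 400M "$aio"                              # grow the backing file...
$rpc bdev_aio_rescan aio_bdev                        # ...and let the AIO bdev notice

The rescan only updates the bdev's block count (51200 -> 102400 above); claiming the new clusters is deliberately left to grow_lvstore so the test can prove it works with a live NVMe/TCP initiator attached.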
00:13:51.936 [2024-07-12 19:05:54.334828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid253585 ] 00:13:51.936 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.936 [2024-07-12 19:05:54.402222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.936 [2024-07-12 19:05:54.479403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.876 19:05:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.876 19:05:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:52.876 19:05:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:53.136 Nvme0n1 00:13:53.136 19:05:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:53.395 [ 00:13:53.395 { 00:13:53.395 "name": "Nvme0n1", 00:13:53.395 "aliases": [ 00:13:53.395 "884ccb35-479e-4a39-b4c9-6178a7da68a8" 00:13:53.395 ], 00:13:53.395 "product_name": "NVMe disk", 00:13:53.395 "block_size": 4096, 00:13:53.395 "num_blocks": 38912, 00:13:53.395 "uuid": "884ccb35-479e-4a39-b4c9-6178a7da68a8", 00:13:53.395 "assigned_rate_limits": { 00:13:53.395 "rw_ios_per_sec": 0, 00:13:53.395 "rw_mbytes_per_sec": 0, 00:13:53.395 "r_mbytes_per_sec": 0, 00:13:53.395 "w_mbytes_per_sec": 0 00:13:53.395 }, 00:13:53.395 "claimed": false, 00:13:53.395 "zoned": false, 00:13:53.395 "supported_io_types": { 00:13:53.395 "read": true, 00:13:53.395 "write": true, 00:13:53.395 "unmap": true, 00:13:53.395 "flush": true, 00:13:53.395 "reset": true, 00:13:53.395 "nvme_admin": true, 00:13:53.395 "nvme_io": true, 00:13:53.395 "nvme_io_md": false, 00:13:53.395 "write_zeroes": true, 00:13:53.395 "zcopy": false, 00:13:53.395 "get_zone_info": false, 00:13:53.395 "zone_management": false, 00:13:53.395 "zone_append": false, 00:13:53.395 "compare": true, 00:13:53.395 "compare_and_write": true, 00:13:53.395 "abort": true, 00:13:53.395 "seek_hole": false, 00:13:53.395 "seek_data": false, 00:13:53.395 "copy": true, 00:13:53.395 "nvme_iov_md": false 00:13:53.395 }, 00:13:53.395 "memory_domains": [ 00:13:53.396 { 00:13:53.396 "dma_device_id": "system", 00:13:53.396 "dma_device_type": 1 00:13:53.396 } 00:13:53.396 ], 00:13:53.396 "driver_specific": { 00:13:53.396 "nvme": [ 00:13:53.396 { 00:13:53.396 "trid": { 00:13:53.396 "trtype": "TCP", 00:13:53.396 "adrfam": "IPv4", 00:13:53.396 "traddr": "10.0.0.2", 00:13:53.396 "trsvcid": "4420", 00:13:53.396 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:53.396 }, 00:13:53.396 "ctrlr_data": { 00:13:53.396 "cntlid": 1, 00:13:53.396 "vendor_id": "0x8086", 00:13:53.396 "model_number": "SPDK bdev Controller", 00:13:53.396 "serial_number": "SPDK0", 00:13:53.396 "firmware_revision": "24.09", 00:13:53.396 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:53.396 "oacs": { 00:13:53.396 "security": 0, 00:13:53.396 "format": 0, 00:13:53.396 "firmware": 0, 00:13:53.396 "ns_manage": 0 00:13:53.396 }, 00:13:53.396 "multi_ctrlr": true, 00:13:53.396 "ana_reporting": false 00:13:53.396 }, 
00:13:53.396 "vs": { 00:13:53.396 "nvme_version": "1.3" 00:13:53.396 }, 00:13:53.396 "ns_data": { 00:13:53.396 "id": 1, 00:13:53.396 "can_share": true 00:13:53.396 } 00:13:53.396 } 00:13:53.396 ], 00:13:53.396 "mp_policy": "active_passive" 00:13:53.396 } 00:13:53.396 } 00:13:53.396 ] 00:13:53.396 19:05:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=253826 00:13:53.396 19:05:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:53.396 19:05:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:53.396 Running I/O for 10 seconds... 00:13:54.333 Latency(us) 00:13:54.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:54.333 Nvme0n1 : 1.00 23090.00 90.20 0.00 0.00 0.00 0.00 0.00 00:13:54.333 =================================================================================================================== 00:13:54.333 Total : 23090.00 90.20 0.00 0.00 0.00 0.00 0.00 00:13:54.333 00:13:55.271 19:05:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0703c0ac-cad9-4b6a-bd65-fa36b791ff77 00:13:55.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.271 Nvme0n1 : 2.00 23229.00 90.74 0.00 0.00 0.00 0.00 0.00 00:13:55.271 =================================================================================================================== 00:13:55.271 Total : 23229.00 90.74 0.00 0.00 0.00 0.00 0.00 00:13:55.271 00:13:55.532 true 00:13:55.532 19:05:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0703c0ac-cad9-4b6a-bd65-fa36b791ff77 00:13:55.532 19:05:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:55.792 19:05:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:55.792 19:05:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:55.792 19:05:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 253826 00:13:56.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.361 Nvme0n1 : 3.00 23280.67 90.94 0.00 0.00 0.00 0.00 0.00 00:13:56.361 =================================================================================================================== 00:13:56.361 Total : 23280.67 90.94 0.00 0.00 0.00 0.00 0.00 00:13:56.361 00:13:57.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:57.299 Nvme0n1 : 4.00 23367.25 91.28 0.00 0.00 0.00 0.00 0.00 00:13:57.299 =================================================================================================================== 00:13:57.299 Total : 23367.25 91.28 0.00 0.00 0.00 0.00 0.00 00:13:57.299 00:13:58.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.678 Nvme0n1 : 5.00 23324.80 91.11 0.00 0.00 0.00 0.00 0.00 00:13:58.678 =================================================================================================================== 00:13:58.678 
Total : 23324.80 91.11 0.00 0.00 0.00 0.00 0.00 00:13:58.678 00:13:59.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:59.276 Nvme0n1 : 6.00 23365.50 91.27 0.00 0.00 0.00 0.00 0.00 00:13:59.276 =================================================================================================================== 00:13:59.276 Total : 23365.50 91.27 0.00 0.00 0.00 0.00 0.00 00:13:59.276 00:14:00.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:00.652 Nvme0n1 : 7.00 23376.71 91.32 0.00 0.00 0.00 0.00 0.00 00:14:00.652 =================================================================================================================== 00:14:00.652 Total : 23376.71 91.32 0.00 0.00 0.00 0.00 0.00 00:14:00.652 00:14:01.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:01.589 Nvme0n1 : 8.00 23407.62 91.44 0.00 0.00 0.00 0.00 0.00 00:14:01.589 =================================================================================================================== 00:14:01.589 Total : 23407.62 91.44 0.00 0.00 0.00 0.00 0.00 00:14:01.589 00:14:02.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:02.525 Nvme0n1 : 9.00 23424.56 91.50 0.00 0.00 0.00 0.00 0.00 00:14:02.525 =================================================================================================================== 00:14:02.525 Total : 23424.56 91.50 0.00 0.00 0.00 0.00 0.00 00:14:02.525 00:14:03.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.461 Nvme0n1 : 10.00 23450.80 91.60 0.00 0.00 0.00 0.00 0.00 00:14:03.461 =================================================================================================================== 00:14:03.461 Total : 23450.80 91.60 0.00 0.00 0.00 0.00 0.00 00:14:03.461 00:14:03.461 00:14:03.461 Latency(us) 00:14:03.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.461 Nvme0n1 : 10.00 23444.79 91.58 0.00 0.00 5455.94 3205.57 10941.66 00:14:03.461 =================================================================================================================== 00:14:03.461 Total : 23444.79 91.58 0.00 0.00 5455.94 3205.57 10941.66 00:14:03.461 0 00:14:03.461 19:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 253585 00:14:03.461 19:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 253585 ']' 00:14:03.461 19:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 253585 00:14:03.461 19:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:03.461 19:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:03.461 19:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 253585 00:14:03.461 19:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:03.461 19:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:03.461 19:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 253585' 00:14:03.461 killing process with pid 253585 00:14:03.461 19:06:05 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 253585 00:14:03.461 Received shutdown signal, test time was about 10.000000 seconds 00:14:03.461 00:14:03.461 Latency(us) 00:14:03.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.461 =================================================================================================================== 00:14:03.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:03.461 19:06:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 253585 00:14:03.721 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:03.721 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:03.980 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0703c0ac-cad9-4b6a-bd65-fa36b791ff77 00:14:03.980 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:04.240 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:04.240 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:04.240 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:04.240 [2024-07-12 19:06:06.774784] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0703c0ac-cad9-4b6a-bd65-fa36b791ff77 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0703c0ac-cad9-4b6a-bd65-fa36b791ff77 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0703c0ac-cad9-4b6a-bd65-fa36b791ff77 00:14:04.499 request: 00:14:04.499 { 00:14:04.499 "uuid": "0703c0ac-cad9-4b6a-bd65-fa36b791ff77", 00:14:04.499 "method": "bdev_lvol_get_lvstores", 00:14:04.499 "req_id": 1 00:14:04.499 } 00:14:04.499 Got JSON-RPC error response 00:14:04.499 response: 00:14:04.499 { 00:14:04.499 "code": -19, 00:14:04.499 "message": "No such device" 00:14:04.499 } 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:04.499 19:06:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:04.759 aio_bdev 00:14:04.759 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 884ccb35-479e-4a39-b4c9-6178a7da68a8 00:14:04.759 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=884ccb35-479e-4a39-b4c9-6178a7da68a8 00:14:04.759 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:04.759 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:04.759 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:04.759 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:04.759 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:05.018 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 884ccb35-479e-4a39-b4c9-6178a7da68a8 -t 2000 00:14:05.018 [ 00:14:05.018 { 00:14:05.018 "name": "884ccb35-479e-4a39-b4c9-6178a7da68a8", 00:14:05.018 "aliases": [ 00:14:05.018 "lvs/lvol" 00:14:05.018 ], 00:14:05.018 "product_name": "Logical Volume", 00:14:05.018 "block_size": 4096, 00:14:05.018 "num_blocks": 38912, 00:14:05.018 "uuid": "884ccb35-479e-4a39-b4c9-6178a7da68a8", 00:14:05.018 "assigned_rate_limits": { 00:14:05.018 "rw_ios_per_sec": 0, 00:14:05.018 "rw_mbytes_per_sec": 0, 00:14:05.018 "r_mbytes_per_sec": 0, 00:14:05.018 "w_mbytes_per_sec": 0 00:14:05.018 }, 00:14:05.018 "claimed": false, 00:14:05.018 "zoned": false, 00:14:05.018 "supported_io_types": { 00:14:05.018 "read": true, 00:14:05.018 "write": true, 00:14:05.018 "unmap": true, 00:14:05.018 "flush": false, 00:14:05.018 "reset": true, 00:14:05.018 "nvme_admin": false, 00:14:05.018 "nvme_io": false, 00:14:05.018 
"nvme_io_md": false, 00:14:05.018 "write_zeroes": true, 00:14:05.018 "zcopy": false, 00:14:05.018 "get_zone_info": false, 00:14:05.018 "zone_management": false, 00:14:05.018 "zone_append": false, 00:14:05.018 "compare": false, 00:14:05.018 "compare_and_write": false, 00:14:05.018 "abort": false, 00:14:05.018 "seek_hole": true, 00:14:05.018 "seek_data": true, 00:14:05.018 "copy": false, 00:14:05.018 "nvme_iov_md": false 00:14:05.018 }, 00:14:05.018 "driver_specific": { 00:14:05.018 "lvol": { 00:14:05.018 "lvol_store_uuid": "0703c0ac-cad9-4b6a-bd65-fa36b791ff77", 00:14:05.018 "base_bdev": "aio_bdev", 00:14:05.018 "thin_provision": false, 00:14:05.018 "num_allocated_clusters": 38, 00:14:05.018 "snapshot": false, 00:14:05.018 "clone": false, 00:14:05.018 "esnap_clone": false 00:14:05.018 } 00:14:05.018 } 00:14:05.018 } 00:14:05.018 ] 00:14:05.018 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:05.018 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0703c0ac-cad9-4b6a-bd65-fa36b791ff77 00:14:05.018 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:05.293 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:05.293 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0703c0ac-cad9-4b6a-bd65-fa36b791ff77 00:14:05.293 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:05.293 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:05.293 19:06:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 884ccb35-479e-4a39-b4c9-6178a7da68a8 00:14:05.551 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0703c0ac-cad9-4b6a-bd65-fa36b791ff77 00:14:05.811 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:05.811 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:06.070 00:14:06.070 real 0m15.859s 00:14:06.070 user 0m15.633s 00:14:06.070 sys 0m1.375s 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:06.070 ************************************ 00:14:06.070 END TEST lvs_grow_clean 00:14:06.070 ************************************ 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:06.070 ************************************ 00:14:06.070 START TEST lvs_grow_dirty 00:14:06.070 ************************************ 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:06.070 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:06.328 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:06.328 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:06.328 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7da00427-4f87-435e-8d30-20b94608a1ee 00:14:06.328 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7da00427-4f87-435e-8d30-20b94608a1ee 00:14:06.328 19:06:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:06.587 19:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:06.587 19:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:06.587 19:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7da00427-4f87-435e-8d30-20b94608a1ee lvol 150 00:14:06.847 19:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b9fc2ebb-8bcb-4fa8-96aa-4ecab0c6148c 00:14:06.847 19:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:06.847 19:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:06.847 
[2024-07-12 19:06:09.384683] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:06.847 [2024-07-12 19:06:09.384729] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:06.847 true 00:14:06.847 19:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:06.847 19:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7da00427-4f87-435e-8d30-20b94608a1ee 00:14:07.107 19:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:07.107 19:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:07.366 19:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b9fc2ebb-8bcb-4fa8-96aa-4ecab0c6148c 00:14:07.366 19:06:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:07.626 [2024-07-12 19:06:10.062693] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.626 19:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:07.886 19:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=256860 00:14:07.886 19:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:07.886 19:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:07.886 19:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 256860 /var/tmp/bdevperf.sock 00:14:07.886 19:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 256860 ']' 00:14:07.886 19:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:07.886 19:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.886 19:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:07.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:07.886 19:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.886 19:06:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:07.886 [2024-07-12 19:06:10.292482] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:14:07.886 [2024-07-12 19:06:10.292531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid256860 ] 00:14:07.886 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.886 [2024-07-12 19:06:10.360726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.886 [2024-07-12 19:06:10.440720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.824 19:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.824 19:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:08.824 19:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:09.084 Nvme0n1 00:14:09.084 19:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:09.343 [ 00:14:09.343 { 00:14:09.343 "name": "Nvme0n1", 00:14:09.343 "aliases": [ 00:14:09.343 "b9fc2ebb-8bcb-4fa8-96aa-4ecab0c6148c" 00:14:09.343 ], 00:14:09.343 "product_name": "NVMe disk", 00:14:09.343 "block_size": 4096, 00:14:09.343 "num_blocks": 38912, 00:14:09.343 "uuid": "b9fc2ebb-8bcb-4fa8-96aa-4ecab0c6148c", 00:14:09.343 "assigned_rate_limits": { 00:14:09.343 "rw_ios_per_sec": 0, 00:14:09.343 "rw_mbytes_per_sec": 0, 00:14:09.343 "r_mbytes_per_sec": 0, 00:14:09.343 "w_mbytes_per_sec": 0 00:14:09.343 }, 00:14:09.343 "claimed": false, 00:14:09.343 "zoned": false, 00:14:09.343 "supported_io_types": { 00:14:09.343 "read": true, 00:14:09.343 "write": true, 00:14:09.343 "unmap": true, 00:14:09.343 "flush": true, 00:14:09.343 "reset": true, 00:14:09.343 "nvme_admin": true, 00:14:09.343 "nvme_io": true, 00:14:09.343 "nvme_io_md": false, 00:14:09.343 "write_zeroes": true, 00:14:09.343 "zcopy": false, 00:14:09.343 "get_zone_info": false, 00:14:09.343 "zone_management": false, 00:14:09.343 "zone_append": false, 00:14:09.343 "compare": true, 00:14:09.343 "compare_and_write": true, 00:14:09.343 "abort": true, 00:14:09.343 "seek_hole": false, 00:14:09.343 "seek_data": false, 00:14:09.343 "copy": true, 00:14:09.343 "nvme_iov_md": false 00:14:09.343 }, 00:14:09.343 "memory_domains": [ 00:14:09.343 { 00:14:09.343 "dma_device_id": "system", 00:14:09.343 "dma_device_type": 1 00:14:09.343 } 00:14:09.343 ], 00:14:09.343 "driver_specific": { 00:14:09.343 "nvme": [ 00:14:09.343 { 00:14:09.343 "trid": { 00:14:09.343 "trtype": "TCP", 00:14:09.343 "adrfam": "IPv4", 00:14:09.343 "traddr": "10.0.0.2", 00:14:09.343 "trsvcid": "4420", 00:14:09.343 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:09.343 }, 00:14:09.343 "ctrlr_data": { 00:14:09.343 "cntlid": 1, 00:14:09.343 "vendor_id": "0x8086", 00:14:09.343 "model_number": "SPDK bdev Controller", 00:14:09.343 "serial_number": "SPDK0", 
00:14:09.343 "firmware_revision": "24.09", 00:14:09.343 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:09.343 "oacs": { 00:14:09.343 "security": 0, 00:14:09.343 "format": 0, 00:14:09.343 "firmware": 0, 00:14:09.343 "ns_manage": 0 00:14:09.343 }, 00:14:09.343 "multi_ctrlr": true, 00:14:09.343 "ana_reporting": false 00:14:09.343 }, 00:14:09.343 "vs": { 00:14:09.343 "nvme_version": "1.3" 00:14:09.343 }, 00:14:09.343 "ns_data": { 00:14:09.343 "id": 1, 00:14:09.343 "can_share": true 00:14:09.343 } 00:14:09.343 } 00:14:09.343 ], 00:14:09.343 "mp_policy": "active_passive" 00:14:09.343 } 00:14:09.343 } 00:14:09.343 ] 00:14:09.343 19:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=257142 00:14:09.343 19:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:09.343 19:06:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:09.343 Running I/O for 10 seconds... 00:14:10.281 Latency(us) 00:14:10.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.281 Nvme0n1 : 1.00 23115.00 90.29 0.00 0.00 0.00 0.00 0.00 00:14:10.281 =================================================================================================================== 00:14:10.281 Total : 23115.00 90.29 0.00 0.00 0.00 0.00 0.00 00:14:10.281 00:14:11.218 19:06:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7da00427-4f87-435e-8d30-20b94608a1ee 00:14:11.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:11.218 Nvme0n1 : 2.00 23337.50 91.16 0.00 0.00 0.00 0.00 0.00 00:14:11.218 =================================================================================================================== 00:14:11.218 Total : 23337.50 91.16 0.00 0.00 0.00 0.00 0.00 00:14:11.218 00:14:11.477 true 00:14:11.478 19:06:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7da00427-4f87-435e-8d30-20b94608a1ee 00:14:11.478 19:06:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:11.736 19:06:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:11.736 19:06:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:11.736 19:06:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 257142 00:14:12.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.305 Nvme0n1 : 3.00 23347.67 91.20 0.00 0.00 0.00 0.00 0.00 00:14:12.305 =================================================================================================================== 00:14:12.305 Total : 23347.67 91.20 0.00 0.00 0.00 0.00 0.00 00:14:12.305 00:14:13.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.243 Nvme0n1 : 4.00 23400.75 91.41 0.00 0.00 0.00 0.00 0.00 00:14:13.243 =================================================================================================================== 00:14:13.243 Total : 23400.75 91.41 0.00 0.00 
0.00 0.00 0.00 00:14:13.243 00:14:14.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.624 Nvme0n1 : 5.00 23458.00 91.63 0.00 0.00 0.00 0.00 0.00 00:14:14.624 =================================================================================================================== 00:14:14.624 Total : 23458.00 91.63 0.00 0.00 0.00 0.00 0.00 00:14:14.624 00:14:15.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.562 Nvme0n1 : 6.00 23487.50 91.75 0.00 0.00 0.00 0.00 0.00 00:14:15.562 =================================================================================================================== 00:14:15.562 Total : 23487.50 91.75 0.00 0.00 0.00 0.00 0.00 00:14:15.562 00:14:16.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.503 Nvme0n1 : 7.00 23515.86 91.86 0.00 0.00 0.00 0.00 0.00 00:14:16.503 =================================================================================================================== 00:14:16.503 Total : 23515.86 91.86 0.00 0.00 0.00 0.00 0.00 00:14:16.503 00:14:17.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.441 Nvme0n1 : 8.00 23538.50 91.95 0.00 0.00 0.00 0.00 0.00 00:14:17.441 =================================================================================================================== 00:14:17.441 Total : 23538.50 91.95 0.00 0.00 0.00 0.00 0.00 00:14:17.441 00:14:18.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.380 Nvme0n1 : 9.00 23506.89 91.82 0.00 0.00 0.00 0.00 0.00 00:14:18.380 =================================================================================================================== 00:14:18.380 Total : 23506.89 91.82 0.00 0.00 0.00 0.00 0.00 00:14:18.380 00:14:19.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.326 Nvme0n1 : 10.00 23528.80 91.91 0.00 0.00 0.00 0.00 0.00 00:14:19.326 =================================================================================================================== 00:14:19.326 Total : 23528.80 91.91 0.00 0.00 0.00 0.00 0.00 00:14:19.326 00:14:19.326 00:14:19.326 Latency(us) 00:14:19.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.326 Nvme0n1 : 10.00 23530.97 91.92 0.00 0.00 5436.31 3205.57 15272.74 00:14:19.326 =================================================================================================================== 00:14:19.326 Total : 23530.97 91.92 0.00 0.00 5436.31 3205.57 15272.74 00:14:19.326 0 00:14:19.326 19:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 256860 00:14:19.326 19:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 256860 ']' 00:14:19.326 19:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 256860 00:14:19.326 19:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:19.326 19:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:19.326 19:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 256860 00:14:19.326 19:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:19.326 19:06:21 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:19.326 19:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 256860' 00:14:19.326 killing process with pid 256860 00:14:19.326 19:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 256860 00:14:19.326 Received shutdown signal, test time was about 10.000000 seconds 00:14:19.326 00:14:19.326 Latency(us) 00:14:19.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.326 =================================================================================================================== 00:14:19.326 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:19.326 19:06:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 256860 00:14:19.585 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:19.845 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7da00427-4f87-435e-8d30-20b94608a1ee 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 253078 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 253078 00:14:20.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 253078 Killed "${NVMF_APP[@]}" "$@" 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=258980 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 258980 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 258980 ']' 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.104 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.105 19:06:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:20.364 [2024-07-12 19:06:22.706941] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:14:20.364 [2024-07-12 19:06:22.706988] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.364 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.364 [2024-07-12 19:06:22.776021] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.364 [2024-07-12 19:06:22.854519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.364 [2024-07-12 19:06:22.854551] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.364 [2024-07-12 19:06:22.854558] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.364 [2024-07-12 19:06:22.854564] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.364 [2024-07-12 19:06:22.854569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:20.364 [2024-07-12 19:06:22.854588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.302 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.302 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:21.302 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:21.302 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:21.302 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:21.302 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.302 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:21.302 [2024-07-12 19:06:23.704003] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:21.302 [2024-07-12 19:06:23.704089] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:21.302 [2024-07-12 19:06:23.704115] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:21.302 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:21.302 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b9fc2ebb-8bcb-4fa8-96aa-4ecab0c6148c 00:14:21.302 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=b9fc2ebb-8bcb-4fa8-96aa-4ecab0c6148c 00:14:21.302 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:21.302 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:21.302 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:21.302 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:21.302 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:21.562 19:06:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b9fc2ebb-8bcb-4fa8-96aa-4ecab0c6148c -t 2000 00:14:21.562 [ 00:14:21.562 { 00:14:21.562 "name": "b9fc2ebb-8bcb-4fa8-96aa-4ecab0c6148c", 00:14:21.562 "aliases": [ 00:14:21.562 "lvs/lvol" 00:14:21.562 ], 00:14:21.562 "product_name": "Logical Volume", 00:14:21.562 "block_size": 4096, 00:14:21.562 "num_blocks": 38912, 00:14:21.562 "uuid": "b9fc2ebb-8bcb-4fa8-96aa-4ecab0c6148c", 00:14:21.562 "assigned_rate_limits": { 00:14:21.562 "rw_ios_per_sec": 0, 00:14:21.562 "rw_mbytes_per_sec": 0, 00:14:21.562 "r_mbytes_per_sec": 0, 00:14:21.562 "w_mbytes_per_sec": 0 00:14:21.562 }, 00:14:21.562 "claimed": false, 00:14:21.562 "zoned": false, 00:14:21.562 "supported_io_types": { 00:14:21.562 "read": true, 00:14:21.562 "write": true, 00:14:21.562 "unmap": true, 00:14:21.562 "flush": false, 00:14:21.562 "reset": true, 00:14:21.562 "nvme_admin": false, 00:14:21.562 "nvme_io": false, 00:14:21.562 "nvme_io_md": 
false, 00:14:21.562 "write_zeroes": true, 00:14:21.562 "zcopy": false, 00:14:21.562 "get_zone_info": false, 00:14:21.562 "zone_management": false, 00:14:21.562 "zone_append": false, 00:14:21.562 "compare": false, 00:14:21.562 "compare_and_write": false, 00:14:21.562 "abort": false, 00:14:21.562 "seek_hole": true, 00:14:21.562 "seek_data": true, 00:14:21.562 "copy": false, 00:14:21.562 "nvme_iov_md": false 00:14:21.562 }, 00:14:21.562 "driver_specific": { 00:14:21.562 "lvol": { 00:14:21.562 "lvol_store_uuid": "7da00427-4f87-435e-8d30-20b94608a1ee", 00:14:21.562 "base_bdev": "aio_bdev", 00:14:21.562 "thin_provision": false, 00:14:21.562 "num_allocated_clusters": 38, 00:14:21.562 "snapshot": false, 00:14:21.562 "clone": false, 00:14:21.562 "esnap_clone": false 00:14:21.562 } 00:14:21.562 } 00:14:21.562 } 00:14:21.562 ] 00:14:21.562 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:21.563 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7da00427-4f87-435e-8d30-20b94608a1ee 00:14:21.563 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:21.822 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:21.822 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7da00427-4f87-435e-8d30-20b94608a1ee 00:14:21.822 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:22.081 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:22.081 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:22.081 [2024-07-12 19:06:24.584655] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:22.081 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7da00427-4f87-435e-8d30-20b94608a1ee 00:14:22.081 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:22.081 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7da00427-4f87-435e-8d30-20b94608a1ee 00:14:22.081 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.081 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.081 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.081 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.081 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:14:22.081 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.081 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.081 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:22.081 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7da00427-4f87-435e-8d30-20b94608a1ee 00:14:22.341 request: 00:14:22.341 { 00:14:22.341 "uuid": "7da00427-4f87-435e-8d30-20b94608a1ee", 00:14:22.341 "method": "bdev_lvol_get_lvstores", 00:14:22.341 "req_id": 1 00:14:22.341 } 00:14:22.341 Got JSON-RPC error response 00:14:22.341 response: 00:14:22.341 { 00:14:22.341 "code": -19, 00:14:22.341 "message": "No such device" 00:14:22.341 } 00:14:22.341 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:22.341 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:22.341 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:22.341 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:22.341 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:22.601 aio_bdev 00:14:22.601 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b9fc2ebb-8bcb-4fa8-96aa-4ecab0c6148c 00:14:22.601 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=b9fc2ebb-8bcb-4fa8-96aa-4ecab0c6148c 00:14:22.601 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:22.601 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:22.601 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:22.601 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:22.601 19:06:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:22.601 19:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b9fc2ebb-8bcb-4fa8-96aa-4ecab0c6148c -t 2000 00:14:22.861 [ 00:14:22.861 { 00:14:22.861 "name": "b9fc2ebb-8bcb-4fa8-96aa-4ecab0c6148c", 00:14:22.861 "aliases": [ 00:14:22.861 "lvs/lvol" 00:14:22.861 ], 00:14:22.861 "product_name": "Logical Volume", 00:14:22.861 "block_size": 4096, 00:14:22.861 "num_blocks": 38912, 00:14:22.861 "uuid": "b9fc2ebb-8bcb-4fa8-96aa-4ecab0c6148c", 00:14:22.861 "assigned_rate_limits": { 00:14:22.861 "rw_ios_per_sec": 0, 00:14:22.861 "rw_mbytes_per_sec": 0, 00:14:22.861 "r_mbytes_per_sec": 0, 00:14:22.861 "w_mbytes_per_sec": 0 00:14:22.861 }, 00:14:22.861 "claimed": false, 00:14:22.861 "zoned": false, 00:14:22.861 "supported_io_types": { 
00:14:22.861 "read": true, 00:14:22.861 "write": true, 00:14:22.861 "unmap": true, 00:14:22.861 "flush": false, 00:14:22.861 "reset": true, 00:14:22.861 "nvme_admin": false, 00:14:22.861 "nvme_io": false, 00:14:22.861 "nvme_io_md": false, 00:14:22.861 "write_zeroes": true, 00:14:22.861 "zcopy": false, 00:14:22.861 "get_zone_info": false, 00:14:22.861 "zone_management": false, 00:14:22.861 "zone_append": false, 00:14:22.861 "compare": false, 00:14:22.861 "compare_and_write": false, 00:14:22.861 "abort": false, 00:14:22.861 "seek_hole": true, 00:14:22.861 "seek_data": true, 00:14:22.861 "copy": false, 00:14:22.861 "nvme_iov_md": false 00:14:22.861 }, 00:14:22.861 "driver_specific": { 00:14:22.861 "lvol": { 00:14:22.861 "lvol_store_uuid": "7da00427-4f87-435e-8d30-20b94608a1ee", 00:14:22.861 "base_bdev": "aio_bdev", 00:14:22.861 "thin_provision": false, 00:14:22.861 "num_allocated_clusters": 38, 00:14:22.861 "snapshot": false, 00:14:22.861 "clone": false, 00:14:22.861 "esnap_clone": false 00:14:22.861 } 00:14:22.861 } 00:14:22.861 } 00:14:22.861 ] 00:14:22.861 19:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:22.861 19:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7da00427-4f87-435e-8d30-20b94608a1ee 00:14:22.861 19:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:23.120 19:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:23.120 19:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7da00427-4f87-435e-8d30-20b94608a1ee 00:14:23.120 19:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:23.380 19:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:23.380 19:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b9fc2ebb-8bcb-4fa8-96aa-4ecab0c6148c 00:14:23.380 19:06:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7da00427-4f87-435e-8d30-20b94608a1ee 00:14:23.640 19:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:23.900 00:14:23.900 real 0m17.794s 00:14:23.900 user 0m45.536s 00:14:23.900 sys 0m3.748s 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:23.900 ************************************ 00:14:23.900 END TEST lvs_grow_dirty 00:14:23.900 ************************************ 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:23.900 nvmf_trace.0 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:23.900 rmmod nvme_tcp 00:14:23.900 rmmod nvme_fabrics 00:14:23.900 rmmod nvme_keyring 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 258980 ']' 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 258980 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 258980 ']' 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 258980 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 258980 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 258980' 00:14:23.900 killing process with pid 258980 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 258980 00:14:23.900 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 258980 00:14:24.159 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:24.159 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:24.159 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:24.159 19:06:26 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:24.159 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:24.159 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.159 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.159 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.696 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:26.696 00:14:26.696 real 0m43.127s 00:14:26.696 user 1m7.107s 00:14:26.696 sys 0m9.890s 00:14:26.696 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:26.696 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:26.696 ************************************ 00:14:26.696 END TEST nvmf_lvs_grow 00:14:26.696 ************************************ 00:14:26.696 19:06:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:26.696 19:06:28 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:26.696 19:06:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:26.696 19:06:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.696 19:06:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:26.696 ************************************ 00:14:26.696 START TEST nvmf_bdev_io_wait 00:14:26.696 ************************************ 00:14:26.696 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:26.696 * Looking for test storage... 
00:14:26.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:26.696 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.696 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:26.696 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.696 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.696 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.696 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.696 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.696 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.696 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.696 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.696 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.696 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.696 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:26.697 19:06:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:32.057 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:32.057 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:32.057 Found net devices under 0000:86:00.0: cvl_0_0 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.057 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:32.057 Found net devices under 0000:86:00.1: cvl_0_1 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:32.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:32.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms
00:14:32.058
00:14:32.058 --- 10.0.0.2 ping statistics ---
00:14:32.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:32.058 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:32.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:32.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms
00:14:32.058
00:14:32.058 --- 10.0.0.1 ping statistics ---
00:14:32.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:32.058 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=263034
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 263034
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 263034 ']'
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:32.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable
00:14:32.058 19:06:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:14:32.335 [2024-07-12 19:06:34.636217] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:14:32.335 [2024-07-12 19:06:34.636273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.335 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.335 [2024-07-12 19:06:34.703256] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:32.335 [2024-07-12 19:06:34.777048] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.335 [2024-07-12 19:06:34.777089] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.335 [2024-07-12 19:06:34.777098] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.335 [2024-07-12 19:06:34.777103] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.335 [2024-07-12 19:06:34.777108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.335 [2024-07-12 19:06:34.777184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.335 [2024-07-12 19:06:34.777301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.335 [2024-07-12 19:06:34.777336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.335 [2024-07-12 19:06:34.777338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.938 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.938 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:32.938 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.938 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.938 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:32.938 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.938 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:32.938 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.938 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:32.938 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.938 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:32.938 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.938 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:33.202 [2024-07-12 19:06:35.547111] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
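At this point the target application is up inside the cvl_0_0_ns_spdk namespace and the TCP transport exists; the trace just below adds the malloc bdev, the subsystem, its namespace and the 10.0.0.2:4420 listener. Collected in one place, the bring-up that bdev_io_wait.sh drives through the rpc_cmd wrapper amounts to the following hand-driven sketch (stock scripts/rpc.py from an SPDK checkout is assumed in place of rpc_cmd, and the namespace wrapper is dropped; the RPC names and flags are exactly the ones in the trace). Note the deliberately tiny bdev_io pool from bdev_set_options -p 5 -c 1: with only five bdev_io objects and a per-thread cache of one, bdevperf is all but guaranteed to exhaust the pool and fall onto the queued-I/O-wait path this test exists to exercise.

  # start the target parked so bdev options can be set before init completes
  build/bin/nvmf_tgt -m 0xF --wait-for-rpc &

  scripts/rpc.py bdev_set_options -p 5 -c 1   # tiny bdev_io pool: force I/O to queue and wait
  scripts/rpc.py framework_start_init          # finish deferred startup

  # transport, backing ramdisk (64 MiB, 512 B blocks), subsystem and listener,
  # with the same flags the trace shows
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420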
00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:33.202 Malloc0 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.202 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:33.203 [2024-07-12 19:06:35.603069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=263287 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=263289 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:33.203 { 00:14:33.203 "params": { 00:14:33.203 "name": "Nvme$subsystem", 00:14:33.203 "trtype": "$TEST_TRANSPORT", 00:14:33.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:33.203 "adrfam": "ipv4", 00:14:33.203 "trsvcid": "$NVMF_PORT", 00:14:33.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:33.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:33.203 "hdgst": ${hdgst:-false}, 00:14:33.203 "ddgst": ${ddgst:-false} 00:14:33.203 }, 00:14:33.203 "method": "bdev_nvme_attach_controller" 00:14:33.203 } 00:14:33.203 EOF 00:14:33.203 )") 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=263291 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=263294 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:33.203 { 00:14:33.203 "params": { 00:14:33.203 "name": "Nvme$subsystem", 00:14:33.203 "trtype": "$TEST_TRANSPORT", 00:14:33.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:33.203 "adrfam": "ipv4", 00:14:33.203 "trsvcid": "$NVMF_PORT", 00:14:33.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:33.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:33.203 "hdgst": ${hdgst:-false}, 00:14:33.203 "ddgst": ${ddgst:-false} 00:14:33.203 }, 00:14:33.203 "method": "bdev_nvme_attach_controller" 00:14:33.203 } 00:14:33.203 EOF 00:14:33.203 )") 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:33.203 { 00:14:33.203 "params": { 00:14:33.203 "name": "Nvme$subsystem", 00:14:33.203 "trtype": "$TEST_TRANSPORT", 00:14:33.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:33.203 "adrfam": "ipv4", 00:14:33.203 "trsvcid": "$NVMF_PORT", 00:14:33.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:33.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:33.203 "hdgst": ${hdgst:-false}, 00:14:33.203 "ddgst": ${ddgst:-false} 00:14:33.203 }, 00:14:33.203 "method": "bdev_nvme_attach_controller" 00:14:33.203 } 00:14:33.203 EOF 00:14:33.203 )") 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:33.203 19:06:35 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:33.203 { 00:14:33.203 "params": { 00:14:33.203 "name": "Nvme$subsystem", 00:14:33.203 "trtype": "$TEST_TRANSPORT", 00:14:33.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:33.203 "adrfam": "ipv4", 00:14:33.203 "trsvcid": "$NVMF_PORT", 00:14:33.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:33.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:33.203 "hdgst": ${hdgst:-false}, 00:14:33.203 "ddgst": ${ddgst:-false} 00:14:33.203 }, 00:14:33.203 "method": "bdev_nvme_attach_controller" 00:14:33.203 } 00:14:33.203 EOF 00:14:33.203 )") 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 263287 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:33.203 "params": { 00:14:33.203 "name": "Nvme1", 00:14:33.203 "trtype": "tcp", 00:14:33.203 "traddr": "10.0.0.2", 00:14:33.203 "adrfam": "ipv4", 00:14:33.203 "trsvcid": "4420", 00:14:33.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:33.203 "hdgst": false, 00:14:33.203 "ddgst": false 00:14:33.203 }, 00:14:33.203 "method": "bdev_nvme_attach_controller" 00:14:33.203 }' 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
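The config+=( ... ) heredocs above build one bdev_nvme_attach_controller entry per bdevperf instance; gen_nvmf_target_json joins the fragments with IFS=, and pipes the result through jq, and each bdevperf reads the finished config from --json /dev/fd/63, which is evidently a bash process substitution around that function. Reassembled from the printf output just above and the three below, the fragment every instance ends up with is identical (values already substituted from TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT):

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

This is why the latency tables further down all report against the same bdev, Nvme1n1: four independent bdevperf processes (write, read, flush, unmap) each attach their own controller named Nvme1 to the one exported subsystem.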
00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:33.203 "params": { 00:14:33.203 "name": "Nvme1", 00:14:33.203 "trtype": "tcp", 00:14:33.203 "traddr": "10.0.0.2", 00:14:33.203 "adrfam": "ipv4", 00:14:33.203 "trsvcid": "4420", 00:14:33.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:33.203 "hdgst": false, 00:14:33.203 "ddgst": false 00:14:33.203 }, 00:14:33.203 "method": "bdev_nvme_attach_controller" 00:14:33.203 }' 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:33.203 "params": { 00:14:33.203 "name": "Nvme1", 00:14:33.203 "trtype": "tcp", 00:14:33.203 "traddr": "10.0.0.2", 00:14:33.203 "adrfam": "ipv4", 00:14:33.203 "trsvcid": "4420", 00:14:33.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:33.203 "hdgst": false, 00:14:33.203 "ddgst": false 00:14:33.203 }, 00:14:33.203 "method": "bdev_nvme_attach_controller" 00:14:33.203 }' 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:33.203 19:06:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:33.203 "params": { 00:14:33.203 "name": "Nvme1", 00:14:33.203 "trtype": "tcp", 00:14:33.203 "traddr": "10.0.0.2", 00:14:33.203 "adrfam": "ipv4", 00:14:33.203 "trsvcid": "4420", 00:14:33.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:33.203 "hdgst": false, 00:14:33.203 "ddgst": false 00:14:33.203 }, 00:14:33.203 "method": "bdev_nvme_attach_controller" 00:14:33.203 }' 00:14:33.203 [2024-07-12 19:06:35.651118] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:14:33.203 [2024-07-12 19:06:35.651170] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:33.203 [2024-07-12 19:06:35.652287] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:14:33.203 [2024-07-12 19:06:35.652326] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:33.203 [2024-07-12 19:06:35.654359] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:14:33.203 [2024-07-12 19:06:35.654407] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:33.203 [2024-07-12 19:06:35.658174] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:14:33.203 [2024-07-12 19:06:35.658211] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:14:33.203 EAL: No free 2048 kB hugepages reported on node 1
00:14:33.476 EAL: No free 2048 kB hugepages reported on node 1
00:14:33.476 [2024-07-12 19:06:35.833461] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:33.476 EAL: No free 2048 kB hugepages reported on node 1
00:14:33.476 [2024-07-12 19:06:35.911285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:14:33.476 [2024-07-12 19:06:35.932000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:33.476 EAL: No free 2048 kB hugepages reported on node 1
00:14:33.476 [2024-07-12 19:06:36.008140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:14:33.476 [2024-07-12 19:06:36.025060] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:33.750 [2024-07-12 19:06:36.083764] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:33.750 [2024-07-12 19:06:36.111326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:14:33.750 [2024-07-12 19:06:36.161401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:14:34.021 Running I/O for 1 seconds...
00:14:34.021 Running I/O for 1 seconds...
00:14:34.021 Running I/O for 1 seconds...
00:14:34.021 Running I/O for 1 seconds...
00:14:34.988
00:14:34.988 Latency(us)
00:14:34.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:34.988 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:14:34.988 Nvme1n1 : 1.00 243325.82 950.49 0.00 0.00 524.11 208.36 666.05
00:14:34.988 ===================================================================================================================
00:14:34.988 Total : 243325.82 950.49 0.00 0.00 524.11 208.36 666.05
00:14:34.988
00:14:34.988 Latency(us)
00:14:34.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:34.988 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:14:34.988 Nvme1n1 : 1.01 7729.09 30.19 0.00 0.00 16471.51 6496.61 23706.94
00:14:34.988 ===================================================================================================================
00:14:34.988 Total : 7729.09 30.19 0.00 0.00 16471.51 6496.61 23706.94
00:14:34.988
00:14:34.988 Latency(us)
00:14:34.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:34.988 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:14:34.988 Nvme1n1 : 1.01 7311.72 28.56 0.00 0.00 17450.04 6069.20 35104.50
00:14:34.988 ===================================================================================================================
00:14:34.988 Total : 7311.72 28.56 0.00 0.00 17450.04 6069.20 35104.50
00:14:34.988
00:14:34.988 Latency(us)
00:14:34.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:34.988 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:14:34.988 Nvme1n1 : 1.00 12682.47 49.54 0.00 0.00 10063.45 4673.00 22567.18
00:14:34.988 ===================================================================================================================
00:14:34.988 Total : 12682.47 49.54 0.00 0.00 10063.45 4673.00 22567.18
00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@38 -- # wait 263289 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 263291 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 263294 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.257 rmmod nvme_tcp 00:14:35.257 rmmod nvme_fabrics 00:14:35.257 rmmod nvme_keyring 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:35.257 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:35.526 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:35.526 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 263034 ']' 00:14:35.526 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 263034 00:14:35.526 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 263034 ']' 00:14:35.526 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 263034 00:14:35.526 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:35.526 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:35.526 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 263034 00:14:35.526 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:35.526 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:35.526 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 263034' 00:14:35.526 killing process with pid 263034 00:14:35.526 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 263034 00:14:35.526 19:06:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 263034 00:14:35.526 19:06:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:35.526 19:06:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:35.526 19:06:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:35.526 19:06:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:35.526 19:06:38 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:14:35.526 19:06:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.526 19:06:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.526 19:06:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.135 19:06:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:38.135 00:14:38.135 real 0m11.352s 00:14:38.135 user 0m20.498s 00:14:38.135 sys 0m5.910s 00:14:38.135 19:06:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:38.135 19:06:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:38.135 ************************************ 00:14:38.135 END TEST nvmf_bdev_io_wait 00:14:38.135 ************************************ 00:14:38.135 19:06:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:38.135 19:06:40 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:38.135 19:06:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:38.135 19:06:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:38.135 19:06:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:38.135 ************************************ 00:14:38.135 START TEST nvmf_queue_depth 00:14:38.135 ************************************ 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:38.135 * Looking for test storage... 
00:14:38.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:38.135 19:06:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:38.136 19:06:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:43.627 
19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:43.627 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:43.627 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:43.627 Found net devices under 0000:86:00.0: cvl_0_0 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:43.627 Found net devices under 0000:86:00.1: cvl_0_1 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.627 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:43.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:43.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms
00:14:43.628
00:14:43.628 --- 10.0.0.2 ping statistics ---
00:14:43.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:43.628 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms
00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:43.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:43.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms
00:14:43.628
00:14:43.628 --- 10.0.0.1 ping statistics ---
00:14:43.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:43.628 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms
00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0
00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:43.628 19:06:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:43.628 19:06:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:14:43.628 19:06:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:43.628 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:43.628 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:43.628 19:06:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=267094
00:14:43.628 19:06:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 267094
00:14:43.628 19:06:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:14:43.628 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 267094 ']'
00:14:43.628 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:43.628 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:43.628 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:43.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:43.628 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable
00:14:43.628 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:43.628 [2024-07-12 19:06:46.074851] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:14:43.628 [2024-07-12 19:06:46.074893] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.628 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.628 [2024-07-12 19:06:46.145747] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.914 [2024-07-12 19:06:46.219770] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.914 [2024-07-12 19:06:46.219809] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.914 [2024-07-12 19:06:46.219816] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.914 [2024-07-12 19:06:46.219822] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.914 [2024-07-12 19:06:46.219827] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.914 [2024-07-12 19:06:46.219850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:44.554 [2024-07-12 19:06:46.919047] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:44.554 Malloc0 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.554 
19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:44.554 [2024-07-12 19:06:46.985794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=267339 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 267339 /var/tmp/bdevperf.sock 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 267339 ']' 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:44.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.554 19:06:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:44.555 [2024-07-12 19:06:47.035893] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
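Stripped of the xtrace noise, the rpc_cmd calls traced above provision the queue-depth target in five steps. A sketch in direct rpc.py form (rpc.py lives at spdk/scripts/rpc.py per the $rpc_py assignment later in this log; targeting the default /var/tmp/spdk.sock is an assumption, since rpc_cmd hides the socket handling):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB I/O unit size
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevperf started at queue_depth.sh@29 with -q 1024 -o 4096 -w verify -t 10 then drives a 1024-deep verify workload against that subsystem, attaching to it through its own RPC socket (/var/tmp/bdevperf.sock) as the next trace lines show.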
00:14:44.555 [2024-07-12 19:06:47.035937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid267339 ]
00:14:44.555 EAL: No free 2048 kB hugepages reported on node 1
00:14:44.839 [2024-07-12 19:06:47.102561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:44.839 [2024-07-12 19:06:47.180931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:45.452 19:06:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:45.452 19:06:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0
00:14:45.452 19:06:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:14:45.452 19:06:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:45.452 19:06:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:14:45.452 NVMe0n1
00:14:45.452 19:06:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:45.452 19:06:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:14:45.800 Running I/O for 10 seconds...
00:14:55.968
00:14:55.968 Latency(us)
00:14:55.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:55.968 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:14:55.968 Verification LBA range: start 0x0 length 0x4000
00:14:55.968 NVMe0n1 : 10.06 12453.58 48.65 0.00 0.00 81926.23 15956.59 54252.41
00:14:55.968 ===================================================================================================================
00:14:55.968 Total : 12453.58 48.65 0.00 0.00 81926.23 15956.59 54252.41
00:14:55.968 0
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 267339
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 267339 ']'
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 267339
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 267339
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 267339'
00:14:55.968 killing process with pid 267339
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 267339
00:14:55.968 Received shutdown signal, test time was about 10.000000 seconds
00:14:55.968
00:14:55.968 Latency(us)
00:14:55.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:55.968 ===================================================================================================================
00:14:55.968 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 267339
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:55.968 rmmod nvme_tcp
00:14:55.968 rmmod nvme_fabrics
00:14:55.968 rmmod nvme_keyring
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 267094 ']'
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 267094
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 267094 ']'
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 267094
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 267094
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 267094'
00:14:55.968 killing process with pid 267094
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 267094
00:14:55.968 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 267094
00:14:56.256 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:56.256 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:56.256 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:56.256 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:56.256 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:56.256 19:06:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:56.256 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:56.256 19:06:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:58.235 19:07:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:58.235
00:14:58.235 real 0m20.549s
00:14:58.235 user 0m24.951s
00:14:58.235 sys 0m5.809s 00:14:58.235 19:07:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:58.235 19:07:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.235 ************************************ 00:14:58.235 END TEST nvmf_queue_depth 00:14:58.235 ************************************ 00:14:58.235 19:07:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:58.235 19:07:00 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:58.235 19:07:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:58.235 19:07:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.235 19:07:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:58.515 ************************************ 00:14:58.515 START TEST nvmf_target_multipath 00:14:58.515 ************************************ 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:58.515 * Looking for test storage... 00:14:58.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.515 19:07:00 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.515 19:07:00 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:58.516 19:07:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:03.959 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:03.959 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.959 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:03.960 Found net devices under 0000:86:00.0: cvl_0_0 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:03.960 Found net devices under 0000:86:00.1: cvl_0_1 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:03.960 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:04.251 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:04.251 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:04.251 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:04.251 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:04.251 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:04.251 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:04.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:15:04.251 00:15:04.251 --- 10.0.0.2 ping statistics --- 00:15:04.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.251 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:15:04.251 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:04.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:04.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:15:04.252 00:15:04.252 --- 10.0.0.1 ping statistics --- 00:15:04.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.252 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:04.252 only one NIC for nvmf test 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:04.252 rmmod nvme_tcp 00:15:04.252 rmmod nvme_fabrics 00:15:04.252 rmmod nvme_keyring 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.252 19:07:06 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:06.285 00:15:06.285 real 0m8.029s 00:15:06.285 user 0m1.671s 00:15:06.285 sys 0m4.330s 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:06.285 19:07:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:06.285 ************************************ 00:15:06.285 END TEST nvmf_target_multipath 00:15:06.285 ************************************ 00:15:06.604 19:07:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:06.604 19:07:08 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:06.604 19:07:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:06.604 19:07:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.604 19:07:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:06.604 ************************************ 00:15:06.604 START TEST nvmf_zcopy 00:15:06.604 ************************************ 00:15:06.604 19:07:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:06.604 * Looking for test storage... 
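The nvmf_target_multipath run above is a deliberate no-op on this rig: multipath needs a second, separately routable target address, but common.sh found only the single cvl_0_0/cvl_0_1 port pair and left NVMF_SECOND_TARGET_IP empty at nvmf/common.sh@240, so the '[' -z ']' guard at multipath.sh@45 fired and the test exited 0. A sketch of that guard as implied by the trace (the variable name is inferred from common.sh@240):

    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
        echo 'only one NIC for nvmf test'
        nvmftestfini
        exit 0
    fi

The nvmftestfini it calls, here and at the end of every test in this log, unwinds the bring-up: sync, modprobe -r nvme-tcp and nvme-fabrics (the rmmod lines above), kill the nvmf_tgt pid when one was started (skipped here, note the '[' -n '' ']' check), drop the cvl_0_0_ns_spdk namespace via _remove_spdk_ns, and flush the initiator address with ip -4 addr flush cvl_0_1. The nvmf_zcopy test whose banner appears above then repeats the same common.sh bring-up before its own workload.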
00:15:06.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.604 19:07:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:06.604 19:07:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:13.179 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:13.180 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:13.180 
19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:13.180 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:13.180 Found net devices under 0000:86:00.0: cvl_0_0 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:13.180 Found net devices under 0000:86:00.1: cvl_0_1 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:13.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:13.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:15:13.180 00:15:13.180 --- 10.0.0.2 ping statistics --- 00:15:13.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.180 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:13.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:13.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:15:13.180 00:15:13.180 --- 10.0.0.1 ping statistics --- 00:15:13.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.180 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=276163 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 276163 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 276163 ']' 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.180 19:07:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:13.180 [2024-07-12 19:07:14.804868] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:15:13.180 [2024-07-12 19:07:14.804909] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.180 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.180 [2024-07-12 19:07:14.874280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.180 [2024-07-12 19:07:14.952207] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.180 [2024-07-12 19:07:14.952243] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:13.180 [2024-07-12 19:07:14.952250] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:13.180 [2024-07-12 19:07:14.952256] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:13.180 [2024-07-12 19:07:14.952261] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:13.180 [2024-07-12 19:07:14.952297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:15:13.180 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:13.180 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0
00:15:13.180 19:07:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:15:13.180 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable
00:15:13.180 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:13.180 19:07:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:13.180 19:07:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:15:13.180 19:07:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:15:13.180 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:13.180 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:13.180 [2024-07-12 19:07:15.645943] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:13.180 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:13.180 19:07:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:13.181 [2024-07-12 19:07:15.666069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:13.181 malloc0
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
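Stripped of xtrace noise, the target bring-up traced above reduces to the following sequence (rpc_cmd is the test-harness RPC wrapper; the nvmf_tgt path is shortened relative to the spdk checkout; every call above returned success, i.e. [[ 0 == 0 ]]):

    # start the target inside the namespace: instance 0, tracepoint mask 0xFFFF, core mask 0x2
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport with zero-copy enabled
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0           # 32 MiB malloc bdev, 4096-byte blocks

The namespace attach (nvmf_subsystem_add_ns) follows immediately below.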
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:15:13.181 {
00:15:13.181 "params": {
00:15:13.181 "name": "Nvme$subsystem",
00:15:13.181 "trtype": "$TEST_TRANSPORT",
00:15:13.181 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:13.181 "adrfam": "ipv4",
00:15:13.181 "trsvcid": "$NVMF_PORT",
00:15:13.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:13.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:13.181 "hdgst": ${hdgst:-false},
00:15:13.181 "ddgst": ${ddgst:-false}
00:15:13.181 },
00:15:13.181 "method": "bdev_nvme_attach_controller"
00:15:13.181 }
00:15:13.181 EOF
00:15:13.181 )")
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:15:13.181 19:07:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:15:13.181 "params": {
00:15:13.181 "name": "Nvme1",
00:15:13.181 "trtype": "tcp",
00:15:13.181 "traddr": "10.0.0.2",
00:15:13.181 "adrfam": "ipv4",
00:15:13.181 "trsvcid": "4420",
00:15:13.181 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:13.181 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:15:13.181 "hdgst": false,
00:15:13.181 "ddgst": false
00:15:13.181 },
00:15:13.181 "method": "bdev_nvme_attach_controller"
00:15:13.181 }'
00:15:13.181 [2024-07-12 19:07:15.741916] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:15:13.181 [2024-07-12 19:07:15.741960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276313 ]
00:15:13.440 EAL: No free 2048 kB hugepages reported on node 1
00:15:13.440 [2024-07-12 19:07:15.809409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:13.440 [2024-07-12 19:07:15.882279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:13.700 Running I/O for 10 seconds...
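The --json /dev/fd/62 argument in the trace above is bash process substitution: gen_nvmf_target_json emits the bdev_nvme_attach_controller config (printed verbatim by the printf trace) and bdevperf reads it as its JSON config file. Written out plainly, the verify pass is:

    # /dev/fd/62 in the trace is the file descriptor created by <(...)
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192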
00:15:23.684 
00:15:23.684                                                                 Latency(us)
00:15:23.684 Device Information                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:23.684 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:23.684 	 Verification LBA range: start 0x0 length 0x1000
00:15:23.684 	 Nvme1n1                                          :      10.01    8736.53      68.25       0.00       0.00   14608.55    1310.72   25302.59
00:15:23.684 ===================================================================================================================
00:15:23.684 Total                                                    :               8736.53      68.25       0.00       0.00   14608.55    1310.72   25302.59
00:15:23.944 19:07:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=278136
00:15:23.944 19:07:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:15:23.944 19:07:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:23.944 19:07:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:23.944 19:07:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:15:23.944 19:07:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:15:23.944 19:07:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:15:23.944 19:07:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:15:23.944 19:07:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:15:23.944 {
00:15:23.944 "params": {
00:15:23.944 "name": "Nvme$subsystem",
00:15:23.944 "trtype": "$TEST_TRANSPORT",
00:15:23.944 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:23.944 "adrfam": "ipv4",
00:15:23.944 "trsvcid": "$NVMF_PORT",
00:15:23.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:23.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:23.944 "hdgst": ${hdgst:-false},
00:15:23.944 "ddgst": ${ddgst:-false}
00:15:23.944 },
00:15:23.944 "method": "bdev_nvme_attach_controller"
00:15:23.944 }
00:15:23.944 EOF
00:15:23.944 )")
00:15:23.944 19:07:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:15:23.944 [2024-07-12 19:07:26.429721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:23.944 [2024-07-12 19:07:26.429752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:23.944 19:07:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
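Everything from here to the end of the run is the expected-failure phase of the test: while the second bdevperf job (perfpid=278136, 5 seconds of randrw at queue depth 128, 50% read mix) is in flight, the script keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is already attached. Each attempt pauses and resumes the subsystem under zero-copy I/O, and each fails with the subsystem.c/nvmf_rpc.c error pair repeated below. The driving loop itself is hidden behind xtrace_disable (zcopy.sh@41); a plausible reconstruction, with the kill -0 liveness check being an assumption, is:

    # hypothetical shape of the untraced loop in zcopy.sh
    while kill -0 "$perfpid" 2> /dev/null; do
        # expected to fail: NSID 1 is already in use
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done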
00:15:23.944 19:07:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:23.944 19:07:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:23.944 "params": { 00:15:23.944 "name": "Nvme1", 00:15:23.944 "trtype": "tcp", 00:15:23.944 "traddr": "10.0.0.2", 00:15:23.944 "adrfam": "ipv4", 00:15:23.944 "trsvcid": "4420", 00:15:23.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.944 "hdgst": false, 00:15:23.944 "ddgst": false 00:15:23.944 }, 00:15:23.944 "method": "bdev_nvme_attach_controller" 00:15:23.944 }' 00:15:23.944 [2024-07-12 19:07:26.441715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.944 [2024-07-12 19:07:26.441728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.944 [2024-07-12 19:07:26.449736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.944 [2024-07-12 19:07:26.449746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.944 [2024-07-12 19:07:26.461769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.944 [2024-07-12 19:07:26.461778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.944 [2024-07-12 19:07:26.466878] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:15:23.944 [2024-07-12 19:07:26.466917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278136 ] 00:15:23.944 [2024-07-12 19:07:26.473799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.944 [2024-07-12 19:07:26.473811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.944 [2024-07-12 19:07:26.485834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.944 [2024-07-12 19:07:26.485843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.944 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.944 [2024-07-12 19:07:26.497864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.944 [2024-07-12 19:07:26.497872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.944 [2024-07-12 19:07:26.505886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:23.944 [2024-07-12 19:07:26.505895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.513906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.513916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.521927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.521936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.529948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.529957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.534750] app.c: 
908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.210 [2024-07-12 19:07:26.541987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.541998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.550002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.550013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.558024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.558033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.566044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.566053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.574071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.574090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.586103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.586116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.594119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.594128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.602142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.602150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.609377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.210 [2024-07-12 19:07:26.610165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.610174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.618184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.618194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.630233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.630254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.638247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.638264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.646263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.646274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.654283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.654294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:24.210 [2024-07-12 19:07:26.662299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.662309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.674335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.674347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.682352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.682361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.690374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.690383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.698413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.698429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.706431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.706445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.718458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.718470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.726477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.726489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.734496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.734505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.742519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.742528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.750539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.750547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.762578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.762589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.210 [2024-07-12 19:07:26.770598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.210 [2024-07-12 19:07:26.770610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.778620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.778632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.786639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:24.472 [2024-07-12 19:07:26.786659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.794659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.794667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.806693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.806702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.814715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.814724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.822739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.822751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.830758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.830767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.838779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.838788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.850818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.850830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.858838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.858850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.866862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.866873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.874880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.874889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.882905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.882913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.894953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.894962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.902960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.902971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.910981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.910990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.919010] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.919026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.927028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.927038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 Running I/O for 5 seconds... 00:15:24.472 [2024-07-12 19:07:26.941859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.941878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.950476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.950494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.959933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.959951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.968474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.968492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.977870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.977888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.987346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.987364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:26.995948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:26.995966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:27.005345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:27.005363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:27.014776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:27.014793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:27.023973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:27.023991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.472 [2024-07-12 19:07:27.032760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.472 [2024-07-12 19:07:27.032778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.730 [2024-07-12 19:07:27.042076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.730 [2024-07-12 19:07:27.042094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.730 [2024-07-12 19:07:27.050674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.730 
[2024-07-12 19:07:27.050691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.730 [2024-07-12 19:07:27.060085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.730 [2024-07-12 19:07:27.060103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.730 [2024-07-12 19:07:27.069975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.730 [2024-07-12 19:07:27.069992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.730 [2024-07-12 19:07:27.084136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.730 [2024-07-12 19:07:27.084155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.730 [2024-07-12 19:07:27.093045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.730 [2024-07-12 19:07:27.093063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.730 [2024-07-12 19:07:27.101829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.730 [2024-07-12 19:07:27.101847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.730 [2024-07-12 19:07:27.110544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.730 [2024-07-12 19:07:27.110562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.730 [2024-07-12 19:07:27.119005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.730 [2024-07-12 19:07:27.119022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.128283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.128300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.137445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.137463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.146831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.146849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.155417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.155434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.164665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.164683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.173293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.173310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.182285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.182302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.191497] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.191516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.200005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.200023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.209200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.209217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.223754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.223774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.232408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.232427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.241187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.241206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.250681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.250699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.260025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.260043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.269245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.269263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.278352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.278369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.287493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.287511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.731 [2024-07-12 19:07:27.296288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.731 [2024-07-12 19:07:27.296306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.989 [2024-07-12 19:07:27.305623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.989 [2024-07-12 19:07:27.305643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.989 [2024-07-12 19:07:27.315378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.989 [2024-07-12 19:07:27.315396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.989 [2024-07-12 19:07:27.324624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.989 [2024-07-12 19:07:27.324646] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.989 [2024-07-12 19:07:27.333547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.989 [2024-07-12 19:07:27.333564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.342858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.342876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.352534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.352551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.367247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.367265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.374775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.374793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.383562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.383579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.392425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.392443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.401681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.401698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.416176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.416195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.424892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.424911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.433630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.433648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.443044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.443062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.452341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.452360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.466949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.466967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.480660] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.480678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.489485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.489514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.499534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.499551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.508041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.508058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.522387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.522409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.531102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.531120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.539746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.539764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.990 [2024-07-12 19:07:27.549299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.990 [2024-07-12 19:07:27.549318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.558046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.558064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.567262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.567281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.575748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.575766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.584367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.584385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.592988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.593005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.601472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.601489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.615623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.615641] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.624358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.624376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.633747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.633765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.642450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.642467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.651584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.651602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.666032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.666050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.674775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.674792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.683671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.683689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.692878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.692895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.702052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.702073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.711178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.711195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.720922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.720939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.729607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.729624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.738329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.738346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.747466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.747483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.761801] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.761819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.770816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.770834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.780195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.780212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.789492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.789510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.804161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.804178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.249 [2024-07-12 19:07:27.814900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.249 [2024-07-12 19:07:27.814918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:27.824269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:27.824287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:27.833578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:27.833596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:27.842973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:27.842990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:27.857774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:27.857791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:27.872892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:27.872912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:27.886775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:27.886794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:27.900347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:27.900365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:27.914133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:27.914156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:27.923157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:27.923174] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:27.937430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:27.937449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:27.951492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:27.951510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:27.960316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:27.960333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:27.969840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:27.969857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:27.984264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:27.984282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:27.998448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:27.998466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:28.012400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:28.012418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:28.026097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:28.026115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.508 [2024-07-12 19:07:28.039917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.508 [2024-07-12 19:07:28.039935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.509 [2024-07-12 19:07:28.053507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.509 [2024-07-12 19:07:28.053525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.509 [2024-07-12 19:07:28.067477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.509 [2024-07-12 19:07:28.067496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.767 [2024-07-12 19:07:28.081384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.767 [2024-07-12 19:07:28.081403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.767 [2024-07-12 19:07:28.094878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.767 [2024-07-12 19:07:28.094896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.767 [2024-07-12 19:07:28.108897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.767 [2024-07-12 19:07:28.108917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.767 [2024-07-12 19:07:28.123102] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.767 [2024-07-12 19:07:28.123121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line *ERROR* pair (subsystem.c:2054 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1546 "Unable to add namespace") repeats for every subsequent attempt, from 19:07:28.133 through 19:07:31.808 (elapsed-time prefixes 00:15:25.767 through 00:15:29.396); only the timestamps differ between repetitions. The final occurrence is retained below ...]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.137 [2024-07-12 19:07:31.666872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.137 [2024-07-12 19:07:31.675745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.137 [2024-07-12 19:07:31.675763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.137 [2024-07-12 19:07:31.684832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.137 [2024-07-12 19:07:31.684850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.137 [2024-07-12 19:07:31.694598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.137 [2024-07-12 19:07:31.694616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.137 [2024-07-12 19:07:31.703369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.137 [2024-07-12 19:07:31.703387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.396 [2024-07-12 19:07:31.718032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.396 [2024-07-12 19:07:31.718050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.396 [2024-07-12 19:07:31.727017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.396 [2024-07-12 19:07:31.727034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.396 [2024-07-12 19:07:31.741394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.396 [2024-07-12 19:07:31.741412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.396 [2024-07-12 19:07:31.754981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.396 [2024-07-12 19:07:31.754999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.396 [2024-07-12 19:07:31.763837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.396 [2024-07-12 19:07:31.763854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.396 [2024-07-12 19:07:31.777634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.396 [2024-07-12 19:07:31.777652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.396 [2024-07-12 19:07:31.790722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.396 [2024-07-12 19:07:31.790740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.396 [2024-07-12 19:07:31.799582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.396 [2024-07-12 19:07:31.799599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.396 [2024-07-12 19:07:31.808788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.396 [2024-07-12 19:07:31.808806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.396 [2024-07-12 19:07:31.817868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.396 [2024-07-12 19:07:31.817885] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:29.396 [2024-07-12 19:07:31.832299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:29.396 [2024-07-12 19:07:31.832316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:29.396 [2024-07-12 19:07:31.841181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:29.396 [2024-07-12 19:07:31.841198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:29.396 [2024-07-12 19:07:31.855232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:29.396 [2024-07-12 19:07:31.855250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:29.396 [2024-07-12 19:07:31.868863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:29.396 [2024-07-12 19:07:31.868881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:29.396 [2024-07-12 19:07:31.882993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:29.396 [2024-07-12 19:07:31.883011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:29.396 [2024-07-12 19:07:31.896688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:29.396 [2024-07-12 19:07:31.896705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:29.396 [2024-07-12 19:07:31.905481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:29.396 [2024-07-12 19:07:31.905498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:29.396 [2024-07-12 19:07:31.914616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:29.396 [2024-07-12 19:07:31.914633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:29.396 [2024-07-12 19:07:31.924316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:29.396 [2024-07-12 19:07:31.924333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:29.396 [2024-07-12 19:07:31.938391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:29.396 [2024-07-12 19:07:31.938408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:29.396 [2024-07-12 19:07:31.948182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:29.396 [2024-07-12 19:07:31.948201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:29.396
00:15:29.396 Latency(us)
00:15:29.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:29.396 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:29.396 Nvme1n1 : 5.01 16807.02 131.30 0.00 0.00 7608.09 3191.32 17438.27
00:15:29.396 ===================================================================================================================
00:15:29.396 Total : 16807.02 131.30 0.00 0.00 7608.09 3191.32 17438.27
00:15:29.396 [2024-07-12 19:07:31.960217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:29.396 [2024-07-12 19:07:31.960238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:29.655 [2024-07-12 19:07:31.972254]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.655 [2024-07-12 19:07:31.972266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.656 [2024-07-12 19:07:31.984295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.656 [2024-07-12 19:07:31.984313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.656 [2024-07-12 19:07:31.996316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.656 [2024-07-12 19:07:31.996329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.656 [2024-07-12 19:07:32.008341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.656 [2024-07-12 19:07:32.008354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.656 [2024-07-12 19:07:32.020379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.656 [2024-07-12 19:07:32.020393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.656 [2024-07-12 19:07:32.032409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.656 [2024-07-12 19:07:32.032423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.656 [2024-07-12 19:07:32.044439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.656 [2024-07-12 19:07:32.044452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.656 [2024-07-12 19:07:32.056470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.656 [2024-07-12 19:07:32.056480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.656 [2024-07-12 19:07:32.068498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.656 [2024-07-12 19:07:32.068507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.656 [2024-07-12 19:07:32.080538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.656 [2024-07-12 19:07:32.080550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.656 [2024-07-12 19:07:32.092565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.656 [2024-07-12 19:07:32.092574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.656 [2024-07-12 19:07:32.104599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.656 [2024-07-12 19:07:32.104609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.656 [2024-07-12 19:07:32.116629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.656 [2024-07-12 19:07:32.116640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.656 [2024-07-12 19:07:32.128667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.656 [2024-07-12 19:07:32.128677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (278136) - No such process 00:15:29.656 19:07:32 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@49 -- # wait 278136 00:15:29.656 19:07:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.656 19:07:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.656 19:07:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:29.656 19:07:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.656 19:07:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:29.656 19:07:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.656 19:07:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:29.656 delay0 00:15:29.656 19:07:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.656 19:07:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:29.656 19:07:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.656 19:07:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:29.656 19:07:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.656 19:07:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:29.656 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.915 [2024-07-12 19:07:32.260986] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:36.484 Initializing NVMe Controllers 00:15:36.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:36.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:36.485 Initialization complete. Launching workers. 
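For reference, the rpc_cmd/abort sequence traced above maps onto direct SPDK tooling roughly as follows. This is a hedged sketch: scripts/rpc.py talking to the default /var/tmp/spdk.sock socket is assumed, while every command name and parameter is copied from the trace.

# Swap NSID 1 of cnode1 over to a deliberately slow delay bdev, then drive
# abortable I/O at the target over TCP for 5 seconds with the bundled example:
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'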
00:15:36.485 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 819 00:15:36.485 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1106, failed to submit 33 00:15:36.485 success 933, unsuccess 173, failed 0 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:36.485 rmmod nvme_tcp 00:15:36.485 rmmod nvme_fabrics 00:15:36.485 rmmod nvme_keyring 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 276163 ']' 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 276163 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 276163 ']' 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 276163 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 276163 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 276163' 00:15:36.485 killing process with pid 276163 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 276163 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 276163 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.485 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.394 19:07:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:38.394 00:15:38.394 real 0m31.875s 00:15:38.394 user 0m44.480s 00:15:38.394 sys 0m9.489s 00:15:38.394 19:07:40 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:38.394 19:07:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:38.394 ************************************ 00:15:38.394 END TEST nvmf_zcopy 00:15:38.394 ************************************ 00:15:38.394 19:07:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:38.394 19:07:40 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:38.394 19:07:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:38.394 19:07:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.394 19:07:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:38.394 ************************************ 00:15:38.394 START TEST nvmf_nmic 00:15:38.394 ************************************ 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:38.394 * Looking for test storage... 00:15:38.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.394 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:38.654 19:07:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:43.932 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:43.932 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:43.932 Found net devices under 0000:86:00.0: cvl_0_0 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:43.932 Found net devices under 0000:86:00.1: cvl_0_1 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:43.932 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:44.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:15:44.191 00:15:44.191 --- 10.0.0.2 ping statistics --- 00:15:44.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.191 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:44.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
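Condensing the nvmf_tcp_init trace above, the test bed that these two ping checks exercise is built with plain iproute2 commands. The recap below is a sketch assembled from the traced lines; the interface names cvl_0_0/cvl_0_1 are specific to this E810 run.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target (first ping above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator (ping in progress here)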
00:15:44.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:15:44.191 00:15:44.191 --- 10.0.0.1 ping statistics --- 00:15:44.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.191 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=283496 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 283496 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 283496 ']' 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.191 19:07:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:44.450 [2024-07-12 19:07:46.764899] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:15:44.450 [2024-07-12 19:07:46.764946] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.450 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.450 [2024-07-12 19:07:46.836717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:44.450 [2024-07-12 19:07:46.920495] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.450 [2024-07-12 19:07:46.920529] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:44.450 [2024-07-12 19:07:46.920537] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.450 [2024-07-12 19:07:46.920543] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.450 [2024-07-12 19:07:46.920549] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.450 [2024-07-12 19:07:46.920595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.450 [2024-07-12 19:07:46.920630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.450 [2024-07-12 19:07:46.920711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.450 [2024-07-12 19:07:46.920713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:45.017 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.017 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:45.017 19:07:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:45.017 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:45.017 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:45.276 [2024-07-12 19:07:47.622156] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:45.276 Malloc0 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:45.276 [2024-07-12 19:07:47.674087] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:45.276 test case1: single bdev can't be used in multiple subsystems 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:45.276 [2024-07-12 19:07:47.698006] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:45.276 [2024-07-12 19:07:47.698025] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:45.276 [2024-07-12 19:07:47.698032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.276 request: 00:15:45.276 { 00:15:45.276 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:45.276 "namespace": { 00:15:45.276 "bdev_name": "Malloc0", 00:15:45.276 "no_auto_visible": false 00:15:45.276 }, 00:15:45.276 "method": "nvmf_subsystem_add_ns", 00:15:45.276 "req_id": 1 00:15:45.276 } 00:15:45.276 Got JSON-RPC error response 00:15:45.276 response: 00:15:45.276 { 00:15:45.276 "code": -32602, 00:15:45.276 "message": "Invalid parameters" 00:15:45.276 } 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:45.276 Adding namespace failed - expected result. 
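The request/response pair above is the raw JSON-RPC exchange behind test case1. Issued by hand, the same failing sequence would look roughly like this (a sketch assuming the standard scripts/rpc.py front end; command names and arguments are taken from the rpc_cmd trace):

# Malloc0 is already claimed exclusive_write by nqn.2016-06.io.spdk:cnode1,
# so attaching it to a second subsystem must fail with -32602 Invalid parameters:
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0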
00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:45.276 test case2: host connect to nvmf target in multiple paths 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:45.276 [2024-07-12 19:07:47.710132] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.276 19:07:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:46.655 19:07:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:47.594 19:07:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:47.594 19:07:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:47.594 19:07:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:47.594 19:07:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:47.594 19:07:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:50.130 19:07:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:50.130 19:07:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:50.130 19:07:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:50.130 19:07:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:50.131 19:07:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:50.131 19:07:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:50.131 19:07:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:50.131 [global] 00:15:50.131 thread=1 00:15:50.131 invalidate=1 00:15:50.131 rw=write 00:15:50.131 time_based=1 00:15:50.131 runtime=1 00:15:50.131 ioengine=libaio 00:15:50.131 direct=1 00:15:50.131 bs=4096 00:15:50.131 iodepth=1 00:15:50.131 norandommap=0 00:15:50.131 numjobs=1 00:15:50.131 00:15:50.131 verify_dump=1 00:15:50.131 verify_backlog=512 00:15:50.131 verify_state_save=0 00:15:50.131 do_verify=1 00:15:50.131 verify=crc32c-intel 00:15:50.131 [job0] 00:15:50.131 filename=/dev/nvme0n1 00:15:50.131 Could not set queue depth (nvme0n1) 00:15:50.131 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:50.131 fio-3.35 00:15:50.131 Starting 1 thread 00:15:51.509 00:15:51.509 job0: (groupid=0, jobs=1): err= 0: pid=284578: Fri Jul 12 19:07:53 2024 00:15:51.509 read: IOPS=21, BW=86.4KiB/s (88.4kB/s)(88.0KiB/1019msec) 00:15:51.509 slat (nsec): min=9958, max=22495, avg=20439.36, stdev=2395.70 00:15:51.509 
clat (usec): min=40794, max=41057, avg=40962.98, stdev=58.13
00:15:51.509 lat (usec): min=40804, max=41079, avg=40983.41, stdev=59.79
00:15:51.509 clat percentiles (usec):
00:15:51.509 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:15:51.509 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:15:51.509 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:15:51.509 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:15:51.509 | 99.99th=[41157]
00:15:51.509 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets
00:15:51.509 slat (nsec): min=10172, max=46966, avg=11594.67, stdev=2604.16
00:15:51.509 clat (usec): min=126, max=312, avg=213.26, stdev=40.96
00:15:51.509 lat (usec): min=136, max=359, avg=224.85, stdev=41.10
00:15:51.509 clat percentiles (usec):
00:15:51.509 | 1.00th=[ 135], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155],
00:15:51.509 | 30.00th=[ 167], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 241],
00:15:51.509 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 243], 95.00th=[ 245],
00:15:51.509 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 314], 99.95th=[ 314],
00:15:51.509 | 99.99th=[ 314]
00:15:51.509 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:15:51.509 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:15:51.509 lat (usec) : 250=94.76%, 500=1.12%
00:15:51.509 lat (msec) : 50=4.12%
00:15:51.509 cpu : usr=0.69%, sys=0.59%, ctx=534, majf=0, minf=2
00:15:51.509 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:15:51.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:51.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:51.509 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:51.509 latency : target=0, window=0, percentile=100.00%, depth=1
00:15:51.509
00:15:51.509 Run status group 0 (all jobs):
00:15:51.509 READ: bw=86.4KiB/s (88.4kB/s), 86.4KiB/s-86.4KiB/s (88.4kB/s-88.4kB/s), io=88.0KiB (90.1kB), run=1019-1019msec
00:15:51.509 WRITE: bw=2010KiB/s (2058kB/s), 2010KiB/s-2010KiB/s (2058kB/s-2058kB/s), io=2048KiB (2097kB), run=1019-1019msec
00:15:51.509
00:15:51.509 Disk stats (read/write):
00:15:51.509 nvme0n1: ios=69/512, merge=0/0, ticks=795/101, in_queue=896, util=91.08%
19:07:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:15:51.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0
00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0
00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup
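The waitforserial_disconnect trace above polls lsblk until the test serial disappears; its connect-side counterpart, used earlier when the two paths were attached, follows the same pattern. Here is a minimal sketch reconstructed from the autotest_common.sh trace: the loop bound, sleep interval, and lsblk/grep probes appear in the trace, while the exact function body is inferred.

# Poll until exactly one block device carries the given serial, 2s per attempt:
waitforserial() {
    local i=0
    local nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}
waitforserial SPDKISFASTANDAWESOME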
00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:51.509 rmmod nvme_tcp 00:15:51.509 rmmod nvme_fabrics 00:15:51.509 rmmod nvme_keyring 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 283496 ']' 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 283496 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 283496 ']' 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 283496 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 283496 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 283496' 00:15:51.509 killing process with pid 283496 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 283496 00:15:51.509 19:07:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 283496 00:15:51.768 19:07:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:51.768 19:07:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:51.768 19:07:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:51.768 19:07:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.768 19:07:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:51.768 19:07:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.768 19:07:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.768 19:07:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.673 19:07:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:53.673 00:15:53.673 real 0m15.378s 00:15:53.673 user 0m35.559s 00:15:53.673 sys 0m5.195s 00:15:53.673 19:07:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.673 19:07:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:53.673 ************************************ 00:15:53.673 END TEST nvmf_nmic 00:15:53.673 ************************************ 00:15:53.932 19:07:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:53.932 19:07:56 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:53.932 19:07:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # 
'[' 3 -le 1 ']' 00:15:53.932 19:07:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.932 19:07:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.932 ************************************ 00:15:53.932 START TEST nvmf_fio_target 00:15:53.932 ************************************ 00:15:53.932 19:07:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:53.932 * Looking for test storage... 00:15:53.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.932 19:07:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.932 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:53.932 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.932 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.932 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.932 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:53.933 19:07:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:00.503 19:08:01 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:00.503 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:00.503 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.503 19:08:01 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:00.503 Found net devices under 0000:86:00.0: cvl_0_0 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:00.503 Found net devices under 0000:86:00.1: cvl_0_1 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:00.503 19:08:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:00.503 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:00.503 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:00.503 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:00.503 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:00.503 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:00.503 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:00.503 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:00.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:00.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:16:00.503 00:16:00.503 --- 10.0.0.2 ping statistics --- 00:16:00.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.503 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:16:00.503 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:00.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:00.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:16:00.503 00:16:00.503 --- 10.0.0.1 ping statistics --- 00:16:00.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.503 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:16:00.503 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:00.503 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:00.503 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:00.503 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:00.503 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:00.503 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=288324 00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 288324 00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 288324 ']' 00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
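The plumbing above moves one port of the e810 pair into a private network namespace so that the target (10.0.0.2, inside the namespace) and the initiator (10.0.0.1, on the host) can exercise NVMe/TCP on a single machine. A minimal sketch of the same pattern that needs no physical NIC, using a veth pair as a stand-in (the namespace and device names below are hypothetical; the harness itself uses cvl_0_0/cvl_0_1 and cvl_0_0_ns_spdk):

# Create a namespace for the target side and a veth pair to reach it.
ip netns add spdk_tgt_ns
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns spdk_tgt_ns

# Address both ends to mirror the layout in the log.
ip addr add 10.0.0.1/24 dev veth_init
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt

# Bring the links up and admit NVMe/TCP traffic on port 4420.
ip link set veth_init up
ip netns exec spdk_tgt_ns ip link set veth_tgt up
ip netns exec spdk_tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT

# Sanity-check reachability in both directions, as the log does next.
ping -c 1 10.0.0.2
ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1

Any process started under "ip netns exec spdk_tgt_ns" then sees only the namespaced interface, which is why the log wraps every target-side command, including nvmf_tgt itself, in the NVMF_TARGET_NS_CMD prefix.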
00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.504 19:08:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.504 [2024-07-12 19:08:02.227949] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:16:00.504 [2024-07-12 19:08:02.227991] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.504 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.504 [2024-07-12 19:08:02.298292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:00.504 [2024-07-12 19:08:02.372162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.504 [2024-07-12 19:08:02.372203] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.504 [2024-07-12 19:08:02.372211] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.504 [2024-07-12 19:08:02.372216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.504 [2024-07-12 19:08:02.372222] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.504 [2024-07-12 19:08:02.372277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.504 [2024-07-12 19:08:02.372318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.504 [2024-07-12 19:08:02.372427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.504 [2024-07-12 19:08:02.372428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:00.504 19:08:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.504 19:08:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:00.504 19:08:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:00.504 19:08:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:00.504 19:08:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.763 19:08:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.763 19:08:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:00.763 [2024-07-12 19:08:03.223608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.763 19:08:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:01.024 19:08:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:01.024 19:08:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:01.284 19:08:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:01.284 19:08:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:01.284 19:08:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
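fio.sh drives the target entirely through rpc.py: the TCP transport and the first malloc bdevs are created above, and the subsystem, namespaces, listener, and initiator connect follow over the next few lines. Condensed into one runnable sketch (assuming nvmf_tgt is already running and rpc.py is on PATH; the log invokes both via absolute workspace paths and additionally passes --hostnqn/--hostid to nvme connect):

rpc=rpc.py   # stand-in for the workspace's spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192       # same transport options as the log
$rpc bdev_malloc_create 64 512                     # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the bdev as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side, kernel NVMe/TCP host:
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The extra bdev_malloc_create and bdev_raid_create calls that follow in the log stack two of the malloc bdevs into a raid0 and three more into a concat bdev, which are then added as further namespaces of the same subsystem, giving the fio jobs four block devices (nvme0n1 through nvme0n4) to write against.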
00:16:01.284 19:08:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:01.543 19:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:01.543 19:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:01.802 19:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:02.061 19:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:02.061 19:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:02.061 19:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:02.061 19:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:02.321 19:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:02.321 19:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:02.580 19:08:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:02.580 19:08:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:02.580 19:08:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:02.840 19:08:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:02.840 19:08:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:03.099 19:08:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:03.358 [2024-07-12 19:08:05.673197] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.358 19:08:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:03.358 19:08:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:03.618 19:08:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:04.998 19:08:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:04.998 19:08:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:16:04.998 19:08:07 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:04.998 19:08:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:16:04.998 19:08:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:16:04.998 19:08:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:16:06.906 19:08:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:06.906 19:08:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:06.906 19:08:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:06.906 19:08:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:16:06.906 19:08:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:06.906 19:08:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:16:06.906 19:08:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:06.906 [global] 00:16:06.906 thread=1 00:16:06.906 invalidate=1 00:16:06.906 rw=write 00:16:06.906 time_based=1 00:16:06.906 runtime=1 00:16:06.906 ioengine=libaio 00:16:06.906 direct=1 00:16:06.906 bs=4096 00:16:06.906 iodepth=1 00:16:06.906 norandommap=0 00:16:06.906 numjobs=1 00:16:06.906 00:16:06.906 verify_dump=1 00:16:06.906 verify_backlog=512 00:16:06.906 verify_state_save=0 00:16:06.906 do_verify=1 00:16:06.906 verify=crc32c-intel 00:16:06.906 [job0] 00:16:06.906 filename=/dev/nvme0n1 00:16:06.906 [job1] 00:16:06.906 filename=/dev/nvme0n2 00:16:06.906 [job2] 00:16:06.906 filename=/dev/nvme0n3 00:16:06.906 [job3] 00:16:06.906 filename=/dev/nvme0n4 00:16:06.906 Could not set queue depth (nvme0n1) 00:16:06.906 Could not set queue depth (nvme0n2) 00:16:06.906 Could not set queue depth (nvme0n3) 00:16:06.906 Could not set queue depth (nvme0n4) 00:16:07.165 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:07.165 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:07.165 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:07.165 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:07.165 fio-3.35 00:16:07.165 Starting 4 threads 00:16:08.544 00:16:08.544 job0: (groupid=0, jobs=1): err= 0: pid=289710: Fri Jul 12 19:08:10 2024 00:16:08.544 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:08.544 slat (nsec): min=3689, max=14928, avg=6617.82, stdev=1461.54 00:16:08.544 clat (usec): min=192, max=41044, avg=1519.86, stdev=6997.64 00:16:08.544 lat (usec): min=200, max=41054, avg=1526.47, stdev=6997.98 00:16:08.544 clat percentiles (usec): 00:16:08.544 | 1.00th=[ 204], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 229], 00:16:08.544 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:16:08.544 | 70.00th=[ 255], 80.00th=[ 277], 90.00th=[ 416], 95.00th=[ 515], 00:16:08.544 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:08.544 | 99.99th=[41157] 00:16:08.544 write: IOPS=804, BW=3217KiB/s (3294kB/s)(3220KiB/1001msec); 0 zone resets 00:16:08.544 slat (usec): min=4, max=38633, avg=67.63, stdev=1398.23 00:16:08.544 clat (usec): 
min=121, max=280, avg=198.29, stdev=43.71 00:16:08.544 lat (usec): min=128, max=38857, avg=265.92, stdev=1400.09 00:16:08.544 clat percentiles (usec): 00:16:08.544 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 151], 00:16:08.544 | 30.00th=[ 157], 40.00th=[ 176], 50.00th=[ 196], 60.00th=[ 239], 00:16:08.544 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 245], 00:16:08.544 | 99.00th=[ 258], 99.50th=[ 262], 99.90th=[ 281], 99.95th=[ 281], 00:16:08.544 | 99.99th=[ 281] 00:16:08.544 bw ( KiB/s): min= 4096, max= 4096, per=25.34%, avg=4096.00, stdev= 0.00, samples=1 00:16:08.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:08.544 lat (usec) : 250=84.28%, 500=13.21%, 750=1.29% 00:16:08.544 lat (msec) : 50=1.21% 00:16:08.544 cpu : usr=0.30%, sys=1.20%, ctx=1322, majf=0, minf=1 00:16:08.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.544 issued rwts: total=512,805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.544 job1: (groupid=0, jobs=1): err= 0: pid=289742: Fri Jul 12 19:08:10 2024 00:16:08.544 read: IOPS=22, BW=88.4KiB/s (90.5kB/s)(92.0KiB/1041msec) 00:16:08.544 slat (nsec): min=9527, max=23827, avg=21953.91, stdev=3484.02 00:16:08.544 clat (usec): min=40703, max=42955, avg=41045.78, stdev=423.42 00:16:08.544 lat (usec): min=40712, max=42978, avg=41067.74, stdev=423.90 00:16:08.544 clat percentiles (usec): 00:16:08.544 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:08.544 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:08.544 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:08.544 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:08.544 | 99.99th=[42730] 00:16:08.544 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:16:08.544 slat (usec): min=9, max=10029, avg=31.18, stdev=442.73 00:16:08.544 clat (usec): min=119, max=300, avg=149.43, stdev=17.80 00:16:08.544 lat (usec): min=130, max=10329, avg=180.61, stdev=449.74 00:16:08.544 clat percentiles (usec): 00:16:08.544 | 1.00th=[ 125], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 137], 00:16:08.544 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 149], 00:16:08.544 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 174], 95.00th=[ 186], 00:16:08.544 | 99.00th=[ 204], 99.50th=[ 208], 99.90th=[ 302], 99.95th=[ 302], 00:16:08.544 | 99.99th=[ 302] 00:16:08.544 bw ( KiB/s): min= 4096, max= 4096, per=25.34%, avg=4096.00, stdev= 0.00, samples=1 00:16:08.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:08.544 lat (usec) : 250=95.51%, 500=0.19% 00:16:08.544 lat (msec) : 50=4.30% 00:16:08.544 cpu : usr=0.38%, sys=0.48%, ctx=538, majf=0, minf=1 00:16:08.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.544 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.544 job2: (groupid=0, jobs=1): err= 0: pid=289778: Fri Jul 12 19:08:10 2024 00:16:08.544 read: IOPS=1242, BW=4971KiB/s 
(5090kB/s)(4976KiB/1001msec) 00:16:08.545 slat (nsec): min=3879, max=24414, avg=7229.83, stdev=1920.75 00:16:08.545 clat (usec): min=177, max=41305, avg=534.24, stdev=3461.20 00:16:08.545 lat (usec): min=185, max=41312, avg=541.47, stdev=3461.39 00:16:08.545 clat percentiles (usec): 00:16:08.545 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:16:08.545 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 235], 60.00th=[ 243], 00:16:08.545 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 306], 00:16:08.545 | 99.00th=[ 523], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:08.545 | 99.99th=[41157] 00:16:08.545 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:08.545 slat (usec): min=9, max=9042, avg=16.51, stdev=230.45 00:16:08.545 clat (usec): min=116, max=340, avg=191.04, stdev=37.32 00:16:08.545 lat (usec): min=127, max=9316, avg=207.55, stdev=235.54 00:16:08.545 clat percentiles (usec): 00:16:08.545 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 141], 20.00th=[ 167], 00:16:08.545 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:16:08.545 | 70.00th=[ 198], 80.00th=[ 233], 90.00th=[ 247], 95.00th=[ 253], 00:16:08.545 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 314], 99.95th=[ 343], 00:16:08.545 | 99.99th=[ 343] 00:16:08.545 bw ( KiB/s): min= 8192, max= 8192, per=50.69%, avg=8192.00, stdev= 0.00, samples=1 00:16:08.545 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:08.545 lat (usec) : 250=83.71%, 500=15.58%, 750=0.40% 00:16:08.545 lat (msec) : 50=0.32% 00:16:08.545 cpu : usr=1.20%, sys=2.70%, ctx=2782, majf=0, minf=1 00:16:08.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.545 issued rwts: total=1244,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.545 job3: (groupid=0, jobs=1): err= 0: pid=289790: Fri Jul 12 19:08:10 2024 00:16:08.545 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:08.545 slat (nsec): min=7240, max=43566, avg=8358.07, stdev=2363.24 00:16:08.545 clat (usec): min=166, max=41204, avg=734.64, stdev=4557.46 00:16:08.545 lat (usec): min=173, max=41224, avg=743.00, stdev=4557.94 00:16:08.545 clat percentiles (usec): 00:16:08.545 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:16:08.545 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:16:08.545 | 70.00th=[ 221], 80.00th=[ 235], 90.00th=[ 260], 95.00th=[ 269], 00:16:08.545 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:08.545 | 99.99th=[41157] 00:16:08.545 write: IOPS=1351, BW=5407KiB/s (5536kB/s)(5412KiB/1001msec); 0 zone resets 00:16:08.545 slat (nsec): min=10238, max=47624, avg=11947.25, stdev=2249.70 00:16:08.545 clat (usec): min=122, max=247, avg=159.46, stdev=23.35 00:16:08.545 lat (usec): min=134, max=278, avg=171.40, stdev=23.72 00:16:08.545 clat percentiles (usec): 00:16:08.545 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:16:08.545 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 163], 00:16:08.545 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 198], 00:16:08.545 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 247], 99.95th=[ 247], 00:16:08.545 | 99.99th=[ 247] 00:16:08.545 bw ( KiB/s): min= 4096, max= 4096, per=25.34%, 
avg=4096.00, stdev= 0.00, samples=1 00:16:08.545 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:08.545 lat (usec) : 250=93.02%, 500=6.39% 00:16:08.545 lat (msec) : 2=0.04%, 50=0.55% 00:16:08.545 cpu : usr=2.00%, sys=3.90%, ctx=2377, majf=0, minf=2 00:16:08.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.545 issued rwts: total=1024,1353,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.545 00:16:08.545 Run status group 0 (all jobs): 00:16:08.545 READ: bw=10.5MiB/s (11.0MB/s), 88.4KiB/s-4971KiB/s (90.5kB/s-5090kB/s), io=10.9MiB (11.5MB), run=1001-1041msec 00:16:08.545 WRITE: bw=15.8MiB/s (16.5MB/s), 1967KiB/s-6138KiB/s (2015kB/s-6285kB/s), io=16.4MiB (17.2MB), run=1001-1041msec 00:16:08.545 00:16:08.545 Disk stats (read/write): 00:16:08.545 nvme0n1: ios=379/512, merge=0/0, ticks=1492/112, in_queue=1604, util=87.07% 00:16:08.545 nvme0n2: ios=68/512, merge=0/0, ticks=1088/77, in_queue=1165, util=91.02% 00:16:08.545 nvme0n3: ios=1046/1380, merge=0/0, ticks=1349/259, in_queue=1608, util=95.18% 00:16:08.545 nvme0n4: ios=778/1024, merge=0/0, ticks=673/149, in_queue=822, util=93.09% 00:16:08.545 19:08:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:08.545 [global] 00:16:08.545 thread=1 00:16:08.545 invalidate=1 00:16:08.545 rw=randwrite 00:16:08.545 time_based=1 00:16:08.545 runtime=1 00:16:08.545 ioengine=libaio 00:16:08.545 direct=1 00:16:08.545 bs=4096 00:16:08.545 iodepth=1 00:16:08.545 norandommap=0 00:16:08.545 numjobs=1 00:16:08.545 00:16:08.545 verify_dump=1 00:16:08.545 verify_backlog=512 00:16:08.545 verify_state_save=0 00:16:08.545 do_verify=1 00:16:08.545 verify=crc32c-intel 00:16:08.545 [job0] 00:16:08.545 filename=/dev/nvme0n1 00:16:08.545 [job1] 00:16:08.545 filename=/dev/nvme0n2 00:16:08.545 [job2] 00:16:08.545 filename=/dev/nvme0n3 00:16:08.545 [job3] 00:16:08.545 filename=/dev/nvme0n4 00:16:08.545 Could not set queue depth (nvme0n1) 00:16:08.545 Could not set queue depth (nvme0n2) 00:16:08.545 Could not set queue depth (nvme0n3) 00:16:08.545 Could not set queue depth (nvme0n4) 00:16:08.804 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.805 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.805 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.805 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:08.805 fio-3.35 00:16:08.805 Starting 4 threads 00:16:10.215 00:16:10.215 job0: (groupid=0, jobs=1): err= 0: pid=290226: Fri Jul 12 19:08:12 2024 00:16:10.215 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:10.215 slat (nsec): min=7196, max=30829, avg=8364.00, stdev=1222.22 00:16:10.215 clat (usec): min=189, max=700, avg=261.22, stdev=63.88 00:16:10.215 lat (usec): min=196, max=708, avg=269.59, stdev=63.95 00:16:10.215 clat percentiles (usec): 00:16:10.215 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231], 00:16:10.215 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 
00:16:10.215 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 355], 95.00th=[ 429], 00:16:10.215 | 99.00th=[ 519], 99.50th=[ 537], 99.90th=[ 586], 99.95th=[ 635], 00:16:10.215 | 99.99th=[ 701] 00:16:10.215 write: IOPS=2471, BW=9886KiB/s (10.1MB/s)(9896KiB/1001msec); 0 zone resets 00:16:10.215 slat (nsec): min=9111, max=38454, avg=12049.35, stdev=1463.56 00:16:10.215 clat (usec): min=123, max=226, avg=162.70, stdev=10.99 00:16:10.215 lat (usec): min=135, max=252, avg=174.75, stdev=11.19 00:16:10.215 clat percentiles (usec): 00:16:10.215 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 153], 00:16:10.215 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:16:10.215 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 182], 00:16:10.215 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 206], 99.95th=[ 215], 00:16:10.215 | 99.99th=[ 227] 00:16:10.215 bw ( KiB/s): min=10674, max=10674, per=38.55%, avg=10674.00, stdev= 0.00, samples=1 00:16:10.215 iops : min= 2668, max= 2668, avg=2668.00, stdev= 0.00, samples=1 00:16:10.215 lat (usec) : 250=86.86%, 500=12.32%, 750=0.82% 00:16:10.215 cpu : usr=4.00%, sys=7.30%, ctx=4524, majf=0, minf=1 00:16:10.215 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.215 issued rwts: total=2048,2474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.215 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.215 job1: (groupid=0, jobs=1): err= 0: pid=290240: Fri Jul 12 19:08:12 2024 00:16:10.215 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:16:10.215 slat (nsec): min=9352, max=23766, avg=22392.05, stdev=2929.95 00:16:10.215 clat (usec): min=40851, max=41987, avg=41070.84, stdev=305.31 00:16:10.215 lat (usec): min=40875, max=42010, avg=41093.23, stdev=305.04 00:16:10.215 clat percentiles (usec): 00:16:10.215 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:10.215 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:10.215 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:16:10.215 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:10.215 | 99.99th=[42206] 00:16:10.215 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:16:10.215 slat (nsec): min=9313, max=47246, avg=10606.15, stdev=2436.67 00:16:10.215 clat (usec): min=132, max=392, avg=216.30, stdev=27.54 00:16:10.215 lat (usec): min=142, max=439, avg=226.91, stdev=28.13 00:16:10.215 clat percentiles (usec): 00:16:10.215 | 1.00th=[ 145], 5.00th=[ 157], 10.00th=[ 176], 20.00th=[ 202], 00:16:10.215 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 227], 00:16:10.215 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 243], 95.00th=[ 249], 00:16:10.215 | 99.00th=[ 269], 99.50th=[ 293], 99.90th=[ 392], 99.95th=[ 392], 00:16:10.215 | 99.99th=[ 392] 00:16:10.215 bw ( KiB/s): min= 4087, max= 4087, per=14.76%, avg=4087.00, stdev= 0.00, samples=1 00:16:10.215 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:16:10.215 lat (usec) : 250=91.95%, 500=3.93% 00:16:10.215 lat (msec) : 50=4.12% 00:16:10.215 cpu : usr=0.29%, sys=0.49%, ctx=536, majf=0, minf=1 00:16:10.215 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.215 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.215 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.215 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.215 job2: (groupid=0, jobs=1): err= 0: pid=290258: Fri Jul 12 19:08:12 2024 00:16:10.215 read: IOPS=1951, BW=7804KiB/s (7991kB/s)(7812KiB/1001msec) 00:16:10.215 slat (nsec): min=7253, max=20302, avg=8200.95, stdev=1001.14 00:16:10.215 clat (usec): min=197, max=583, avg=294.06, stdev=69.91 00:16:10.215 lat (usec): min=205, max=591, avg=302.26, stdev=69.97 00:16:10.215 clat percentiles (usec): 00:16:10.215 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 243], 20.00th=[ 255], 00:16:10.215 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:16:10.215 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 371], 95.00th=[ 494], 00:16:10.215 | 99.00th=[ 510], 99.50th=[ 515], 99.90th=[ 570], 99.95th=[ 586], 00:16:10.215 | 99.99th=[ 586] 00:16:10.215 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:10.215 slat (nsec): min=10255, max=44255, avg=11680.79, stdev=1681.89 00:16:10.215 clat (usec): min=127, max=312, avg=182.25, stdev=28.67 00:16:10.215 lat (usec): min=138, max=331, avg=193.93, stdev=28.79 00:16:10.215 clat percentiles (usec): 00:16:10.215 | 1.00th=[ 137], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 159], 00:16:10.215 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 182], 00:16:10.215 | 70.00th=[ 192], 80.00th=[ 212], 90.00th=[ 227], 95.00th=[ 235], 00:16:10.215 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 289], 99.95th=[ 293], 00:16:10.215 | 99.99th=[ 314] 00:16:10.215 bw ( KiB/s): min= 8247, max= 8247, per=29.78%, avg=8247.00, stdev= 0.00, samples=1 00:16:10.215 iops : min= 2061, max= 2061, avg=2061.00, stdev= 0.00, samples=1 00:16:10.215 lat (usec) : 250=58.19%, 500=40.46%, 750=1.35% 00:16:10.215 cpu : usr=3.30%, sys=6.50%, ctx=4001, majf=0, minf=2 00:16:10.215 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.215 issued rwts: total=1953,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.215 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.215 job3: (groupid=0, jobs=1): err= 0: pid=290263: Fri Jul 12 19:08:12 2024 00:16:10.215 read: IOPS=1846, BW=7385KiB/s (7562kB/s)(7392KiB/1001msec) 00:16:10.215 slat (nsec): min=6264, max=25449, avg=7178.09, stdev=844.97 00:16:10.215 clat (usec): min=205, max=572, avg=311.37, stdev=85.62 00:16:10.215 lat (usec): min=212, max=579, avg=318.55, stdev=85.65 00:16:10.215 clat percentiles (usec): 00:16:10.215 | 1.00th=[ 227], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 260], 00:16:10.215 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:16:10.215 | 70.00th=[ 293], 80.00th=[ 343], 90.00th=[ 494], 95.00th=[ 506], 00:16:10.215 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 570], 99.95th=[ 570], 00:16:10.215 | 99.99th=[ 570] 00:16:10.215 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:10.215 slat (nsec): min=9235, max=46129, avg=10319.84, stdev=1369.65 00:16:10.215 clat (usec): min=119, max=504, avg=186.08, stdev=38.34 00:16:10.215 lat (usec): min=129, max=514, avg=196.40, stdev=38.40 00:16:10.215 clat percentiles (usec): 00:16:10.215 | 1.00th=[ 135], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:16:10.215 | 30.00th=[ 165], 40.00th=[ 169], 
50.00th=[ 174], 60.00th=[ 182], 00:16:10.215 | 70.00th=[ 192], 80.00th=[ 212], 90.00th=[ 235], 95.00th=[ 249], 00:16:10.215 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 351], 99.95th=[ 359], 00:16:10.215 | 99.99th=[ 506] 00:16:10.215 bw ( KiB/s): min= 8175, max= 8175, per=29.52%, avg=8175.00, stdev= 0.00, samples=1 00:16:10.215 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:16:10.215 lat (usec) : 250=54.18%, 500=42.17%, 750=3.64% 00:16:10.215 cpu : usr=2.40%, sys=3.10%, ctx=3897, majf=0, minf=1 00:16:10.215 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.215 issued rwts: total=1848,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.215 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.215 00:16:10.215 Run status group 0 (all jobs): 00:16:10.215 READ: bw=22.4MiB/s (23.5MB/s), 86.0KiB/s-8184KiB/s (88.1kB/s-8380kB/s), io=22.9MiB (24.0MB), run=1001-1023msec 00:16:10.215 WRITE: bw=27.0MiB/s (28.4MB/s), 2002KiB/s-9886KiB/s (2050kB/s-10.1MB/s), io=27.7MiB (29.0MB), run=1001-1023msec 00:16:10.215 00:16:10.215 Disk stats (read/write): 00:16:10.215 nvme0n1: ios=1868/2048, merge=0/0, ticks=602/317, in_queue=919, util=85.97% 00:16:10.215 nvme0n2: ios=66/512, merge=0/0, ticks=1539/107, in_queue=1646, util=89.96% 00:16:10.215 nvme0n3: ios=1593/1962, merge=0/0, ticks=497/331, in_queue=828, util=94.50% 00:16:10.215 nvme0n4: ios=1559/1760, merge=0/0, ticks=1379/323, in_queue=1702, util=94.04% 00:16:10.215 19:08:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:10.215 [global] 00:16:10.215 thread=1 00:16:10.215 invalidate=1 00:16:10.216 rw=write 00:16:10.216 time_based=1 00:16:10.216 runtime=1 00:16:10.216 ioengine=libaio 00:16:10.216 direct=1 00:16:10.216 bs=4096 00:16:10.216 iodepth=128 00:16:10.216 norandommap=0 00:16:10.216 numjobs=1 00:16:10.216 00:16:10.216 verify_dump=1 00:16:10.216 verify_backlog=512 00:16:10.216 verify_state_save=0 00:16:10.216 do_verify=1 00:16:10.216 verify=crc32c-intel 00:16:10.216 [job0] 00:16:10.216 filename=/dev/nvme0n1 00:16:10.216 [job1] 00:16:10.216 filename=/dev/nvme0n2 00:16:10.216 [job2] 00:16:10.216 filename=/dev/nvme0n3 00:16:10.216 [job3] 00:16:10.216 filename=/dev/nvme0n4 00:16:10.216 Could not set queue depth (nvme0n1) 00:16:10.216 Could not set queue depth (nvme0n2) 00:16:10.216 Could not set queue depth (nvme0n3) 00:16:10.216 Could not set queue depth (nvme0n4) 00:16:10.481 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:10.481 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:10.481 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:10.481 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:10.481 fio-3.35 00:16:10.481 Starting 4 threads 00:16:11.863 00:16:11.863 job0: (groupid=0, jobs=1): err= 0: pid=290643: Fri Jul 12 19:08:14 2024 00:16:11.863 read: IOPS=3660, BW=14.3MiB/s (15.0MB/s)(15.0MiB/1048msec) 00:16:11.863 slat (nsec): min=1311, max=20327k, avg=130583.42, stdev=930903.75 00:16:11.863 clat (usec): min=2574, max=54923, avg=16289.70, stdev=10725.60 
00:16:11.863 lat (usec): min=2582, max=65068, avg=16420.29, stdev=10785.40 00:16:11.863 clat percentiles (usec): 00:16:11.863 | 1.00th=[ 5997], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[10159], 00:16:11.863 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12256], 60.00th=[13042], 00:16:11.863 | 70.00th=[13829], 80.00th=[19006], 90.00th=[31327], 95.00th=[45351], 00:16:11.863 | 99.00th=[54264], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:16:11.863 | 99.99th=[54789] 00:16:11.863 write: IOPS=3908, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1048msec); 0 zone resets 00:16:11.863 slat (usec): min=2, max=15758, avg=111.88, stdev=672.27 00:16:11.863 clat (usec): min=1150, max=49642, avg=17148.31, stdev=11005.18 00:16:11.863 lat (usec): min=1160, max=49647, avg=17260.19, stdev=11078.13 00:16:11.863 clat percentiles (usec): 00:16:11.863 | 1.00th=[ 3556], 5.00th=[ 5473], 10.00th=[ 8586], 20.00th=[ 9503], 00:16:11.863 | 30.00th=[10028], 40.00th=[11338], 50.00th=[13042], 60.00th=[15664], 00:16:11.863 | 70.00th=[17433], 80.00th=[22152], 90.00th=[38011], 95.00th=[43254], 00:16:11.863 | 99.00th=[47973], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546], 00:16:11.863 | 99.99th=[49546] 00:16:11.863 bw ( KiB/s): min=16376, max=16392, per=24.43%, avg=16384.00, stdev=11.31, samples=2 00:16:11.863 iops : min= 4094, max= 4098, avg=4096.00, stdev= 2.83, samples=2 00:16:11.863 lat (msec) : 2=0.03%, 4=1.19%, 10=21.67%, 20=54.45%, 50=21.18% 00:16:11.863 lat (msec) : 100=1.49% 00:16:11.863 cpu : usr=3.06%, sys=4.68%, ctx=375, majf=0, minf=1 00:16:11.863 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:11.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.863 issued rwts: total=3836,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.863 job1: (groupid=0, jobs=1): err= 0: pid=290644: Fri Jul 12 19:08:14 2024 00:16:11.863 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:16:11.863 slat (nsec): min=1075, max=12415k, avg=96748.70, stdev=703321.41 00:16:11.863 clat (usec): min=3354, max=47703, avg=12060.57, stdev=5205.23 00:16:11.863 lat (usec): min=3866, max=47711, avg=12157.32, stdev=5263.71 00:16:11.863 clat percentiles (usec): 00:16:11.863 | 1.00th=[ 6587], 5.00th=[ 7504], 10.00th=[ 8094], 20.00th=[ 8717], 00:16:11.863 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[11469], 00:16:11.863 | 70.00th=[13042], 80.00th=[15139], 90.00th=[17433], 95.00th=[22152], 00:16:11.863 | 99.00th=[33424], 99.50th=[40633], 99.90th=[46924], 99.95th=[47449], 00:16:11.863 | 99.99th=[47449] 00:16:11.863 write: IOPS=5410, BW=21.1MiB/s (22.2MB/s)(21.3MiB/1008msec); 0 zone resets 00:16:11.863 slat (usec): min=2, max=12059, avg=85.32, stdev=560.43 00:16:11.863 clat (usec): min=1094, max=49318, avg=12127.38, stdev=7241.08 00:16:11.863 lat (usec): min=1103, max=49324, avg=12212.70, stdev=7296.90 00:16:11.863 clat percentiles (usec): 00:16:11.863 | 1.00th=[ 3195], 5.00th=[ 5407], 10.00th=[ 6980], 20.00th=[ 8717], 00:16:11.863 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:16:11.863 | 70.00th=[11469], 80.00th=[15139], 90.00th=[18220], 95.00th=[28181], 00:16:11.863 | 99.00th=[43254], 99.50th=[46924], 99.90th=[49546], 99.95th=[49546], 00:16:11.863 | 99.99th=[49546] 00:16:11.863 bw ( KiB/s): min=16816, max=25800, per=31.78%, avg=21308.00, stdev=6352.65, samples=2 00:16:11.863 iops : min= 
4204, max= 6450, avg=5327.00, stdev=1588.16, samples=2 00:16:11.863 lat (msec) : 2=0.18%, 4=1.02%, 10=53.80%, 20=37.14%, 50=7.86% 00:16:11.863 cpu : usr=4.57%, sys=4.77%, ctx=522, majf=0, minf=1 00:16:11.863 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:11.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.863 issued rwts: total=5120,5454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.863 job2: (groupid=0, jobs=1): err= 0: pid=290645: Fri Jul 12 19:08:14 2024 00:16:11.863 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:16:11.863 slat (nsec): min=1126, max=10092k, avg=93895.59, stdev=591459.37 00:16:11.863 clat (usec): min=3658, max=47106, avg=11950.68, stdev=5076.62 00:16:11.863 lat (usec): min=3664, max=47112, avg=12044.58, stdev=5097.12 00:16:11.863 clat percentiles (usec): 00:16:11.863 | 1.00th=[ 5604], 5.00th=[ 6587], 10.00th=[ 7898], 20.00th=[ 9110], 00:16:11.863 | 30.00th=[10159], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:16:11.863 | 70.00th=[11994], 80.00th=[13698], 90.00th=[17171], 95.00th=[17957], 00:16:11.863 | 99.00th=[40633], 99.50th=[45351], 99.90th=[46400], 99.95th=[46924], 00:16:11.863 | 99.99th=[46924] 00:16:11.863 write: IOPS=4793, BW=18.7MiB/s (19.6MB/s)(18.9MiB/1007msec); 0 zone resets 00:16:11.863 slat (nsec): min=1920, max=25466k, avg=112908.82, stdev=782571.32 00:16:11.863 clat (usec): min=1986, max=43653, avg=15017.18, stdev=7458.15 00:16:11.863 lat (usec): min=2778, max=43674, avg=15130.09, stdev=7499.82 00:16:11.864 clat percentiles (usec): 00:16:11.864 | 1.00th=[ 5342], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[10552], 00:16:11.864 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[12911], 00:16:11.864 | 70.00th=[16909], 80.00th=[18482], 90.00th=[26084], 95.00th=[30540], 00:16:11.864 | 99.00th=[41681], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:16:11.864 | 99.99th=[43779] 00:16:11.864 bw ( KiB/s): min=18208, max=19384, per=28.03%, avg=18796.00, stdev=831.56, samples=2 00:16:11.864 iops : min= 4552, max= 4846, avg=4699.00, stdev=207.89, samples=2 00:16:11.864 lat (msec) : 2=0.01%, 4=0.45%, 10=21.12%, 20=68.33%, 50=10.09% 00:16:11.864 cpu : usr=3.38%, sys=4.17%, ctx=589, majf=0, minf=1 00:16:11.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:11.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.864 issued rwts: total=4608,4827,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.864 job3: (groupid=0, jobs=1): err= 0: pid=290646: Fri Jul 12 19:08:14 2024 00:16:11.864 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:16:11.864 slat (nsec): min=1154, max=31905k, avg=170782.60, stdev=1499706.26 00:16:11.864 clat (usec): min=4889, max=90910, avg=24303.75, stdev=18034.33 00:16:11.864 lat (usec): min=4895, max=90933, avg=24474.53, stdev=18199.59 00:16:11.864 clat percentiles (usec): 00:16:11.864 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[11207], 00:16:11.864 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15401], 60.00th=[19268], 00:16:11.864 | 70.00th=[21365], 80.00th=[39584], 90.00th=[54264], 95.00th=[68682], 00:16:11.864 | 99.00th=[73925], 99.50th=[73925], 
99.90th=[84411], 99.95th=[89654], 00:16:11.864 | 99.99th=[90702] 00:16:11.864 write: IOPS=3162, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1009msec); 0 zone resets 00:16:11.864 slat (usec): min=2, max=22506, avg=118.65, stdev=838.33 00:16:11.864 clat (usec): min=1019, max=56969, avg=16760.16, stdev=10236.04 00:16:11.864 lat (usec): min=1029, max=56976, avg=16878.81, stdev=10290.41 00:16:11.864 clat percentiles (usec): 00:16:11.864 | 1.00th=[ 2737], 5.00th=[ 5735], 10.00th=[ 7635], 20.00th=[ 9765], 00:16:11.864 | 30.00th=[11338], 40.00th=[12911], 50.00th=[14222], 60.00th=[16188], 00:16:11.864 | 70.00th=[18220], 80.00th=[19530], 90.00th=[31065], 95.00th=[41681], 00:16:11.864 | 99.00th=[54789], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:16:11.864 | 99.99th=[56886] 00:16:11.864 bw ( KiB/s): min= 8472, max=16112, per=18.33%, avg=12292.00, stdev=5402.30, samples=2 00:16:11.864 iops : min= 2118, max= 4028, avg=3073.00, stdev=1350.57, samples=2 00:16:11.864 lat (msec) : 2=0.18%, 4=1.04%, 10=15.06%, 20=54.77%, 50=20.84% 00:16:11.864 lat (msec) : 100=8.13% 00:16:11.864 cpu : usr=2.68%, sys=2.58%, ctx=247, majf=0, minf=1 00:16:11.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:11.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.864 issued rwts: total=3072,3191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.864 00:16:11.864 Run status group 0 (all jobs): 00:16:11.864 READ: bw=62.0MiB/s (65.0MB/s), 11.9MiB/s-19.8MiB/s (12.5MB/s-20.8MB/s), io=65.0MiB (68.1MB), run=1007-1048msec 00:16:11.864 WRITE: bw=65.5MiB/s (68.7MB/s), 12.4MiB/s-21.1MiB/s (13.0MB/s-22.2MB/s), io=68.6MiB (72.0MB), run=1007-1048msec 00:16:11.864 00:16:11.864 Disk stats (read/write): 00:16:11.864 nvme0n1: ios=3474/3584, merge=0/0, ticks=49171/54835, in_queue=104006, util=99.80% 00:16:11.864 nvme0n2: ios=4145/4318, merge=0/0, ticks=49243/54700, in_queue=103943, util=87.92% 00:16:11.864 nvme0n3: ios=4139/4135, merge=0/0, ticks=35707/44301, in_queue=80008, util=98.96% 00:16:11.864 nvme0n4: ios=2460/2560, merge=0/0, ticks=36918/31925, in_queue=68843, util=90.90% 00:16:11.864 19:08:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:11.864 [global] 00:16:11.864 thread=1 00:16:11.864 invalidate=1 00:16:11.864 rw=randwrite 00:16:11.864 time_based=1 00:16:11.864 runtime=1 00:16:11.864 ioengine=libaio 00:16:11.864 direct=1 00:16:11.864 bs=4096 00:16:11.864 iodepth=128 00:16:11.864 norandommap=0 00:16:11.864 numjobs=1 00:16:11.864 00:16:11.864 verify_dump=1 00:16:11.864 verify_backlog=512 00:16:11.864 verify_state_save=0 00:16:11.864 do_verify=1 00:16:11.864 verify=crc32c-intel 00:16:11.864 [job0] 00:16:11.864 filename=/dev/nvme0n1 00:16:11.864 [job1] 00:16:11.864 filename=/dev/nvme0n2 00:16:11.864 [job2] 00:16:11.864 filename=/dev/nvme0n3 00:16:11.864 [job3] 00:16:11.864 filename=/dev/nvme0n4 00:16:11.864 Could not set queue depth (nvme0n1) 00:16:11.864 Could not set queue depth (nvme0n2) 00:16:11.864 Could not set queue depth (nvme0n3) 00:16:11.864 Could not set queue depth (nvme0n4) 00:16:11.864 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:11.864 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:16:11.864 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:11.864 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:11.864 fio-3.35 00:16:11.864 Starting 4 threads 00:16:13.242 00:16:13.242 job0: (groupid=0, jobs=1): err= 0: pid=291010: Fri Jul 12 19:08:15 2024 00:16:13.242 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:16:13.242 slat (nsec): min=1063, max=27633k, avg=144790.15, stdev=1151713.62 00:16:13.242 clat (usec): min=4694, max=66628, avg=17569.59, stdev=11718.76 00:16:13.242 lat (usec): min=4697, max=66633, avg=17714.38, stdev=11818.54 00:16:13.242 clat percentiles (usec): 00:16:13.242 | 1.00th=[ 6259], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10945], 00:16:13.242 | 30.00th=[11076], 40.00th=[11338], 50.00th=[12256], 60.00th=[13829], 00:16:13.242 | 70.00th=[17433], 80.00th=[21890], 90.00th=[38536], 95.00th=[46924], 00:16:13.242 | 99.00th=[62129], 99.50th=[66323], 99.90th=[66847], 99.95th=[66847], 00:16:13.242 | 99.99th=[66847] 00:16:13.242 write: IOPS=3358, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1005msec); 0 zone resets 00:16:13.242 slat (nsec): min=1789, max=31507k, avg=141958.97, stdev=1151552.08 00:16:13.242 clat (usec): min=893, max=85904, avg=21712.32, stdev=17765.58 00:16:13.242 lat (usec): min=1658, max=86789, avg=21854.28, stdev=17824.34 00:16:13.242 clat percentiles (usec): 00:16:13.242 | 1.00th=[ 1827], 5.00th=[ 3523], 10.00th=[ 4555], 20.00th=[ 8029], 00:16:13.242 | 30.00th=[ 8848], 40.00th=[12387], 50.00th=[16319], 60.00th=[19792], 00:16:13.242 | 70.00th=[23725], 80.00th=[37487], 90.00th=[51119], 95.00th=[57410], 00:16:13.242 | 99.00th=[71828], 99.50th=[72877], 99.90th=[81265], 99.95th=[81265], 00:16:13.242 | 99.99th=[85459] 00:16:13.242 bw ( KiB/s): min= 9592, max=16384, per=19.05%, avg=12988.00, stdev=4802.67, samples=2 00:16:13.242 iops : min= 2398, max= 4096, avg=3247.00, stdev=1200.67, samples=2 00:16:13.242 lat (usec) : 1000=0.02% 00:16:13.242 lat (msec) : 2=0.54%, 4=3.27%, 10=21.27%, 20=42.90%, 50=25.35% 00:16:13.242 lat (msec) : 100=6.65% 00:16:13.242 cpu : usr=2.29%, sys=2.39%, ctx=287, majf=0, minf=1 00:16:13.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:16:13.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.242 issued rwts: total=3072,3375,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.242 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.242 job1: (groupid=0, jobs=1): err= 0: pid=291011: Fri Jul 12 19:08:15 2024 00:16:13.242 read: IOPS=3205, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1003msec) 00:16:13.242 slat (nsec): min=1489, max=48985k, avg=166247.93, stdev=1863908.79 00:16:13.242 clat (msec): min=2, max=108, avg=19.32, stdev=24.86 00:16:13.242 lat (msec): min=2, max=108, avg=19.48, stdev=25.00 00:16:13.242 clat percentiles (msec): 00:16:13.242 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:16:13.242 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 11], 00:16:13.242 | 70.00th=[ 13], 80.00th=[ 16], 90.00th=[ 52], 95.00th=[ 97], 00:16:13.242 | 99.00th=[ 109], 99.50th=[ 109], 99.90th=[ 109], 99.95th=[ 109], 00:16:13.242 | 99.99th=[ 109] 00:16:13.242 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:16:13.242 slat (usec): min=2, max=12451, avg=119.54, stdev=683.77 00:16:13.242 clat 
(msec): min=3, max=132, avg=17.97, stdev=21.55 00:16:13.242 lat (msec): min=3, max=132, avg=18.09, stdev=21.67 00:16:13.242 clat percentiles (msec): 00:16:13.242 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 9], 00:16:13.242 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:16:13.242 | 70.00th=[ 11], 80.00th=[ 19], 90.00th=[ 37], 95.00th=[ 62], 00:16:13.242 | 99.00th=[ 118], 99.50th=[ 127], 99.90th=[ 133], 99.95th=[ 133], 00:16:13.242 | 99.99th=[ 133] 00:16:13.242 bw ( KiB/s): min=14208, max=14464, per=21.03%, avg=14336.00, stdev=181.02, samples=2 00:16:13.242 iops : min= 3552, max= 3616, avg=3584.00, stdev=45.25, samples=2 00:16:13.242 lat (msec) : 4=0.88%, 10=61.10%, 20=20.21%, 50=8.60%, 100=5.52% 00:16:13.242 lat (msec) : 250=3.69% 00:16:13.242 cpu : usr=2.79%, sys=5.49%, ctx=332, majf=0, minf=1 00:16:13.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:13.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.242 issued rwts: total=3215,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.242 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.242 job2: (groupid=0, jobs=1): err= 0: pid=291012: Fri Jul 12 19:08:15 2024 00:16:13.242 read: IOPS=5103, BW=19.9MiB/s (20.9MB/s)(20.1MiB/1009msec) 00:16:13.242 slat (nsec): min=1328, max=12252k, avg=111024.69, stdev=767018.61 00:16:13.242 clat (usec): min=3301, max=95164, avg=11828.29, stdev=8255.49 00:16:13.242 lat (usec): min=3325, max=95170, avg=11939.31, stdev=8364.81 00:16:13.242 clat percentiles (usec): 00:16:13.242 | 1.00th=[ 4228], 5.00th=[ 7242], 10.00th=[ 8225], 20.00th=[ 8717], 00:16:13.242 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10683], 00:16:13.242 | 70.00th=[11863], 80.00th=[12911], 90.00th=[15139], 95.00th=[17433], 00:16:13.242 | 99.00th=[53740], 99.50th=[70779], 99.90th=[87557], 99.95th=[94897], 00:16:13.242 | 99.99th=[94897] 00:16:13.242 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:16:13.242 slat (usec): min=2, max=9753, avg=71.43, stdev=354.97 00:16:13.242 clat (usec): min=2247, max=95168, avg=11885.38, stdev=11121.22 00:16:13.242 lat (usec): min=2259, max=95178, avg=11956.82, stdev=11148.85 00:16:13.242 clat percentiles (usec): 00:16:13.242 | 1.00th=[ 3195], 5.00th=[ 4686], 10.00th=[ 6128], 20.00th=[ 8586], 00:16:13.242 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9241], 60.00th=[ 9372], 00:16:13.242 | 70.00th=[10683], 80.00th=[11469], 90.00th=[14222], 95.00th=[33424], 00:16:13.242 | 99.00th=[76022], 99.50th=[86508], 99.90th=[90702], 99.95th=[90702], 00:16:13.242 | 99.99th=[94897] 00:16:13.242 bw ( KiB/s): min=20464, max=23808, per=32.47%, avg=22136.00, stdev=2364.57, samples=2 00:16:13.242 iops : min= 5116, max= 5952, avg=5534.00, stdev=591.14, samples=2 00:16:13.242 lat (msec) : 4=1.91%, 10=57.52%, 20=35.43%, 50=3.45%, 100=1.69% 00:16:13.242 cpu : usr=3.37%, sys=5.06%, ctx=730, majf=0, minf=1 00:16:13.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:13.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.242 issued rwts: total=5149,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.242 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.243 job3: (groupid=0, jobs=1): err= 0: pid=291013: Fri Jul 12 19:08:15 2024 00:16:13.243 
read: IOPS=4327, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1005msec) 00:16:13.243 slat (nsec): min=1369, max=9990.3k, avg=108140.83, stdev=740236.84 00:16:13.243 clat (usec): min=2253, max=35791, avg=12904.13, stdev=3830.15 00:16:13.243 lat (usec): min=4725, max=35798, avg=13012.27, stdev=3880.37 00:16:13.243 clat percentiles (usec): 00:16:13.243 | 1.00th=[ 5604], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[10028], 00:16:13.243 | 30.00th=[10683], 40.00th=[11469], 50.00th=[11863], 60.00th=[12911], 00:16:13.243 | 70.00th=[13566], 80.00th=[14353], 90.00th=[17433], 95.00th=[20317], 00:16:13.243 | 99.00th=[27657], 99.50th=[30016], 99.90th=[35914], 99.95th=[35914], 00:16:13.243 | 99.99th=[35914] 00:16:13.243 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:16:13.243 slat (usec): min=2, max=12684, avg=108.96, stdev=578.10 00:16:13.243 clat (usec): min=2354, max=37085, avg=15254.76, stdev=6624.80 00:16:13.243 lat (usec): min=2365, max=37094, avg=15363.72, stdev=6677.17 00:16:13.243 clat percentiles (usec): 00:16:13.243 | 1.00th=[ 3621], 5.00th=[ 6587], 10.00th=[ 8291], 20.00th=[10814], 00:16:13.243 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12780], 60.00th=[15008], 00:16:13.243 | 70.00th=[18744], 80.00th=[21103], 90.00th=[24511], 95.00th=[27919], 00:16:13.243 | 99.00th=[34341], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:16:13.243 | 99.99th=[36963] 00:16:13.243 bw ( KiB/s): min=16400, max=20464, per=27.03%, avg=18432.00, stdev=2873.68, samples=2 00:16:13.243 iops : min= 4100, max= 5116, avg=4608.00, stdev=718.42, samples=2 00:16:13.243 lat (msec) : 4=0.59%, 10=16.29%, 20=66.99%, 50=16.13% 00:16:13.243 cpu : usr=2.99%, sys=5.08%, ctx=501, majf=0, minf=1 00:16:13.243 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:13.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.243 issued rwts: total=4349,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.243 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.243 00:16:13.243 Run status group 0 (all jobs): 00:16:13.243 READ: bw=61.1MiB/s (64.1MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=61.7MiB (64.7MB), run=1003-1009msec 00:16:13.243 WRITE: bw=66.6MiB/s (69.8MB/s), 13.1MiB/s-21.8MiB/s (13.8MB/s-22.9MB/s), io=67.2MiB (70.4MB), run=1003-1009msec 00:16:13.243 00:16:13.243 Disk stats (read/write): 00:16:13.243 nvme0n1: ios=2581/2759, merge=0/0, ticks=28141/31154, in_queue=59295, util=99.10% 00:16:13.243 nvme0n2: ios=2098/2053, merge=0/0, ticks=25906/18516, in_queue=44422, util=97.53% 00:16:13.243 nvme0n3: ios=4278/4608, merge=0/0, ticks=49492/50551, in_queue=100043, util=97.17% 00:16:13.243 nvme0n4: ios=3606/3615, merge=0/0, ticks=44606/52738, in_queue=97344, util=97.12% 00:16:13.243 19:08:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:13.243 19:08:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=291243 00:16:13.243 19:08:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:13.243 19:08:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:13.243 [global] 00:16:13.243 thread=1 00:16:13.243 invalidate=1 00:16:13.243 rw=read 00:16:13.243 time_based=1 00:16:13.243 runtime=10 00:16:13.243 ioengine=libaio 00:16:13.243 direct=1 00:16:13.243 bs=4096 00:16:13.243 iodepth=1 00:16:13.243 norandommap=1 00:16:13.243 numjobs=1 
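Annotation: the [global] section above is the job file that scripts/fio-wrapper generates for '-p nvmf -i 4096 -d 1 -t read -r 10', i.e. a 10-second time-based sequential read at 4 KiB block size and queue depth 1 against each connected namespace. A minimal stand-alone sketch of one such job (the /dev/nvme0n1 path is the one the log shows below; fio-wrapper normally discovers the devices itself):

fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=read --bs=4096 --iodepth=1 --numjobs=1 --thread=1 \
    --ioengine=libaio --direct=1 --norandommap=1 \
    --time_based=1 --runtime=10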
00:16:13.243 00:16:13.243 [job0] 00:16:13.243 filename=/dev/nvme0n1 00:16:13.243 [job1] 00:16:13.243 filename=/dev/nvme0n2 00:16:13.243 [job2] 00:16:13.243 filename=/dev/nvme0n3 00:16:13.243 [job3] 00:16:13.243 filename=/dev/nvme0n4 00:16:13.243 Could not set queue depth (nvme0n1) 00:16:13.243 Could not set queue depth (nvme0n2) 00:16:13.243 Could not set queue depth (nvme0n3) 00:16:13.243 Could not set queue depth (nvme0n4) 00:16:13.502 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:13.502 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:13.502 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:13.502 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:13.502 fio-3.35 00:16:13.502 Starting 4 threads 00:16:16.787 19:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:16.787 19:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:16.787 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=622592, buflen=4096 00:16:16.787 fio: pid=291391, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:16.787 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:16.787 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:16.787 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=16437248, buflen=4096 00:16:16.787 fio: pid=291390, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:16.787 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=307200, buflen=4096 00:16:16.787 fio: pid=291388, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:16.787 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:16.787 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:17.045 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=327680, buflen=4096 00:16:17.045 fio: pid=291389, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:17.046 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:17.046 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:17.046 00:16:17.046 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=291388: Fri Jul 12 19:08:19 2024 00:16:17.046 read: IOPS=24, BW=97.3KiB/s (99.6kB/s)(300KiB/3084msec) 00:16:17.046 slat (usec): min=11, max=17835, avg=257.17, stdev=2043.27 00:16:17.046 clat (usec): min=479, max=42096, avg=40581.62, stdev=4707.27 00:16:17.046 lat (usec): min=552, max=59086, avg=40841.93, stdev=5162.88 00:16:17.046 clat percentiles (usec): 00:16:17.046 | 1.00th=[ 482], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 
00:16:17.046 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:17.046 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:16:17.046 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:17.046 | 99.99th=[42206] 00:16:17.046 bw ( KiB/s): min= 96, max= 104, per=1.84%, avg=97.60, stdev= 3.58, samples=5 00:16:17.046 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:16:17.046 lat (usec) : 500=1.32% 00:16:17.046 lat (msec) : 50=97.37% 00:16:17.046 cpu : usr=0.10%, sys=0.00%, ctx=79, majf=0, minf=1 00:16:17.046 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.046 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.046 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.046 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.046 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=291389: Fri Jul 12 19:08:19 2024 00:16:17.046 read: IOPS=24, BW=97.6KiB/s (100.0kB/s)(320KiB/3277msec) 00:16:17.046 slat (usec): min=10, max=16772, avg=229.03, stdev=1861.20 00:16:17.046 clat (usec): min=371, max=41222, avg=40466.95, stdev=4539.86 00:16:17.046 lat (usec): min=406, max=57995, avg=40698.55, stdev=4942.21 00:16:17.046 clat percentiles (usec): 00:16:17.046 | 1.00th=[ 371], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:17.046 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:17.046 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:17.046 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:17.046 | 99.99th=[41157] 00:16:17.046 bw ( KiB/s): min= 92, max= 104, per=1.86%, avg=98.00, stdev= 4.90, samples=6 00:16:17.046 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6 00:16:17.046 lat (usec) : 500=1.23% 00:16:17.046 lat (msec) : 50=97.53% 00:16:17.046 cpu : usr=0.12%, sys=0.00%, ctx=84, majf=0, minf=1 00:16:17.046 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.046 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.046 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.046 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.046 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=291390: Fri Jul 12 19:08:19 2024 00:16:17.046 read: IOPS=1377, BW=5507KiB/s (5639kB/s)(15.7MiB/2915msec) 00:16:17.046 slat (usec): min=7, max=15607, avg=15.00, stdev=304.11 00:16:17.046 clat (usec): min=176, max=41621, avg=704.11, stdev=4429.89 00:16:17.046 lat (usec): min=183, max=41673, avg=719.11, stdev=4440.75 00:16:17.046 clat percentiles (usec): 00:16:17.046 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:16:17.046 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 215], 00:16:17.046 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 229], 95.00th=[ 239], 00:16:17.046 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:17.046 | 99.99th=[41681] 00:16:17.046 bw ( KiB/s): min= 144, max=18080, per=80.62%, avg=4251.20, stdev=7758.37, samples=5 00:16:17.046 iops : min= 36, max= 4520, avg=1062.80, stdev=1939.59, samples=5 00:16:17.046 lat (usec) : 250=96.31%, 500=2.42%, 
750=0.02% 00:16:17.046 lat (msec) : 50=1.22% 00:16:17.046 cpu : usr=0.89%, sys=2.06%, ctx=4017, majf=0, minf=1 00:16:17.046 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.046 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.046 issued rwts: total=4014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.046 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.046 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=291391: Fri Jul 12 19:08:19 2024 00:16:17.046 read: IOPS=56, BW=224KiB/s (229kB/s)(608KiB/2717msec) 00:16:17.046 slat (nsec): min=7218, max=28466, avg=12853.65, stdev=6417.85 00:16:17.046 clat (usec): min=195, max=43766, avg=17718.45, stdev=20288.88 00:16:17.046 lat (usec): min=203, max=43787, avg=17731.31, stdev=20294.20 00:16:17.046 clat percentiles (usec): 00:16:17.046 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 217], 00:16:17.046 | 30.00th=[ 221], 40.00th=[ 255], 50.00th=[ 273], 60.00th=[40633], 00:16:17.046 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:16:17.046 | 99.00th=[42206], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:16:17.046 | 99.99th=[43779] 00:16:17.046 bw ( KiB/s): min= 96, max= 768, per=4.46%, avg=235.20, stdev=297.92, samples=5 00:16:17.046 iops : min= 24, max= 192, avg=58.80, stdev=74.48, samples=5 00:16:17.046 lat (usec) : 250=37.91%, 500=18.30%, 750=0.65% 00:16:17.046 lat (msec) : 50=42.48% 00:16:17.046 cpu : usr=0.15%, sys=0.00%, ctx=153, majf=0, minf=2 00:16:17.046 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.046 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.046 issued rwts: total=153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.046 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.046 00:16:17.046 Run status group 0 (all jobs): 00:16:17.046 READ: bw=5273KiB/s (5400kB/s), 97.3KiB/s-5507KiB/s (99.6kB/s-5639kB/s), io=16.9MiB (17.7MB), run=2717-3277msec 00:16:17.046 00:16:17.046 Disk stats (read/write): 00:16:17.046 nvme0n1: ios=70/0, merge=0/0, ticks=2840/0, in_queue=2840, util=95.43% 00:16:17.046 nvme0n2: ios=109/0, merge=0/0, ticks=3862/0, in_queue=3862, util=98.95% 00:16:17.046 nvme0n3: ios=4012/0, merge=0/0, ticks=2750/0, in_queue=2750, util=95.68% 00:16:17.046 nvme0n4: ios=149/0, merge=0/0, ticks=2572/0, in_queue=2572, util=96.49% 00:16:17.304 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:17.304 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:17.304 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:17.304 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:17.561 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:17.561 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc5 00:16:17.819 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:17.819 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:17.819 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:18.077 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 291243 00:16:18.077 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:18.077 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:18.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.077 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:18.077 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:16:18.077 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:18.077 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:18.077 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:18.077 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:18.077 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:16:18.077 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:18.077 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:18.077 nvmf hotplug test: fio failed as expected 00:16:18.078 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:18.336 rmmod nvme_tcp 00:16:18.336 rmmod nvme_fabrics 00:16:18.336 rmmod nvme_keyring 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 288324 ']' 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # 
killprocess 288324 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 288324 ']' 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 288324 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 288324 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 288324' 00:16:18.336 killing process with pid 288324 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 288324 00:16:18.336 19:08:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 288324 00:16:18.596 19:08:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:18.596 19:08:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:18.596 19:08:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:18.596 19:08:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:18.596 19:08:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:18.596 19:08:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.596 19:08:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.596 19:08:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.135 19:08:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:21.135 00:16:21.135 real 0m26.791s 00:16:21.135 user 1m46.477s 00:16:21.135 sys 0m7.899s 00:16:21.135 19:08:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:21.135 19:08:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.135 ************************************ 00:16:21.135 END TEST nvmf_fio_target 00:16:21.135 ************************************ 00:16:21.135 19:08:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:21.135 19:08:23 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:21.135 19:08:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:21.135 19:08:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.135 19:08:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:21.135 ************************************ 00:16:21.135 START TEST nvmf_bdevio 00:16:21.135 ************************************ 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:21.135 * Looking for test storage... 
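Annotation on the hotplug phase that wraps up above: the err=121 (Remote I/O error) results are the expected outcome, not a regression. fio.sh deletes the RAID and malloc bdevs out from under the running fio job, so outstanding reads fail, fio exits non-zero (fio_status=4 after the wait), and the script prints 'nvmf hotplug test: fio failed as expected'. A sketch of the delete sequence it drives over the RPC socket (bdev names exactly as they appear in the log):

scripts/rpc.py bdev_raid_delete concat0
scripts/rpc.py bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc3 Malloc4 Malloc5 Malloc6; do
    scripts/rpc.py bdev_malloc_delete "$m"   # namespace disappears under live I/O
done
wait "$fio_pid"                              # non-zero fio exit here is the pass condition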
00:16:21.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:21.135 19:08:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:26.412 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:26.412 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:26.412 Found net devices under 0000:86:00.0: cvl_0_0 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:26.412 
Found net devices under 0000:86:00.1: cvl_0_1 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:26.412 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:26.672 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:26.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:26.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:16:26.672 00:16:26.672 --- 10.0.0.2 ping statistics --- 00:16:26.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.672 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:16:26.672 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:26.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:26.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:16:26.672 00:16:26.672 --- 10.0.0.1 ping statistics --- 00:16:26.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.672 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:16:26.672 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.672 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:26.672 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:26.672 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.672 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:26.672 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:26.672 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.672 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:26.672 19:08:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:26.672 19:08:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:26.672 19:08:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:26.672 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:26.672 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:26.672 19:08:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=295621 00:16:26.672 19:08:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 295621 00:16:26.672 19:08:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:26.672 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 295621 ']' 00:16:26.672 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.673 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.673 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.673 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.673 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:26.673 [2024-07-12 19:08:29.081294] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:16:26.673 [2024-07-12 19:08:29.081338] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.673 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.673 [2024-07-12 19:08:29.137674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.673 [2024-07-12 19:08:29.214876] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.673 [2024-07-12 19:08:29.214915] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:26.673 [2024-07-12 19:08:29.214922] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.673 [2024-07-12 19:08:29.214928] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.673 [2024-07-12 19:08:29.214933] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.673 [2024-07-12 19:08:29.215047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:26.673 [2024-07-12 19:08:29.215172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:26.673 [2024-07-12 19:08:29.217241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:26.673 [2024-07-12 19:08:29.217243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:27.611 [2024-07-12 19:08:29.927081] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:27.611 Malloc0 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:16:27.611 [2024-07-12 19:08:29.981637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:27.611 { 00:16:27.611 "params": { 00:16:27.611 "name": "Nvme$subsystem", 00:16:27.611 "trtype": "$TEST_TRANSPORT", 00:16:27.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.611 "adrfam": "ipv4", 00:16:27.611 "trsvcid": "$NVMF_PORT", 00:16:27.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.611 "hdgst": ${hdgst:-false}, 00:16:27.611 "ddgst": ${ddgst:-false} 00:16:27.611 }, 00:16:27.611 "method": "bdev_nvme_attach_controller" 00:16:27.611 } 00:16:27.611 EOF 00:16:27.611 )") 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:27.611 19:08:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:27.611 "params": { 00:16:27.611 "name": "Nvme1", 00:16:27.611 "trtype": "tcp", 00:16:27.611 "traddr": "10.0.0.2", 00:16:27.611 "adrfam": "ipv4", 00:16:27.611 "trsvcid": "4420", 00:16:27.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:27.611 "hdgst": false, 00:16:27.611 "ddgst": false 00:16:27.611 }, 00:16:27.611 "method": "bdev_nvme_attach_controller" 00:16:27.611 }' 00:16:27.611 [2024-07-12 19:08:30.028566] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
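Annotation: gen_nvmf_target_json prints the JSON shown above, and bdevio reads it on /dev/fd/62, so the in-process NVMe driver attaches to nqn.2016-06.io.spdk:cnode1 over TCP and runs the blockdev suite against the resulting Nvme1n1 bdev. Outside the harness the same listener is reachable from the kernel initiator with nvme-cli, e.g. this sketch (any host NQN is accepted because the subsystem was created with -a, allow-any-host):

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1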
00:16:27.611 [2024-07-12 19:08:30.028607] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid295869 ] 00:16:27.611 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.611 [2024-07-12 19:08:30.094410] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:27.611 [2024-07-12 19:08:30.171010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.611 [2024-07-12 19:08:30.171119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.611 [2024-07-12 19:08:30.171119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.179 I/O targets: 00:16:28.179 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:28.179 00:16:28.179 00:16:28.179 CUnit - A unit testing framework for C - Version 2.1-3 00:16:28.179 http://cunit.sourceforge.net/ 00:16:28.179 00:16:28.179 00:16:28.179 Suite: bdevio tests on: Nvme1n1 00:16:28.179 Test: blockdev write read block ...passed 00:16:28.179 Test: blockdev write zeroes read block ...passed 00:16:28.179 Test: blockdev write zeroes read no split ...passed 00:16:28.179 Test: blockdev write zeroes read split ...passed 00:16:28.179 Test: blockdev write zeroes read split partial ...passed 00:16:28.179 Test: blockdev reset ...[2024-07-12 19:08:30.606665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:28.179 [2024-07-12 19:08:30.606724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e346d0 (9): Bad file descriptor 00:16:28.179 [2024-07-12 19:08:30.709850] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:28.179 passed 00:16:28.179 Test: blockdev write read 8 blocks ...passed 00:16:28.179 Test: blockdev write read size > 128k ...passed 00:16:28.179 Test: blockdev write read invalid size ...passed 00:16:28.438 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:28.438 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:28.438 Test: blockdev write read max offset ...passed 00:16:28.438 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:28.438 Test: blockdev writev readv 8 blocks ...passed 00:16:28.438 Test: blockdev writev readv 30 x 1block ...passed 00:16:28.438 Test: blockdev writev readv block ...passed 00:16:28.438 Test: blockdev writev readv size > 128k ...passed 00:16:28.438 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:28.438 Test: blockdev comparev and writev ...[2024-07-12 19:08:30.919935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.438 [2024-07-12 19:08:30.919965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:28.438 [2024-07-12 19:08:30.919983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.438 [2024-07-12 19:08:30.919995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.438 [2024-07-12 19:08:30.920243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.438 [2024-07-12 19:08:30.920256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:28.438 [2024-07-12 19:08:30.920272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.438 [2024-07-12 19:08:30.920283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:28.438 [2024-07-12 19:08:30.920520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.438 [2024-07-12 19:08:30.920532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:28.438 [2024-07-12 19:08:30.920553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.438 [2024-07-12 19:08:30.920565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:28.438 [2024-07-12 19:08:30.920812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.438 [2024-07-12 19:08:30.920823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:28.438 [2024-07-12 19:08:30.920839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.438 [2024-07-12 19:08:30.920851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:28.438 passed 00:16:28.438 Test: blockdev nvme passthru rw ...passed 00:16:28.438 Test: blockdev nvme passthru vendor specific ...[2024-07-12 19:08:31.002498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.438 [2024-07-12 19:08:31.002522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:28.438 [2024-07-12 19:08:31.002637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.438 [2024-07-12 19:08:31.002650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:28.438 [2024-07-12 19:08:31.002766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.438 [2024-07-12 19:08:31.002779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:28.438 [2024-07-12 19:08:31.002892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.438 [2024-07-12 19:08:31.002903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:28.438 passed 00:16:28.698 Test: blockdev nvme admin passthru ...passed 00:16:28.698 Test: blockdev copy ...passed 00:16:28.698 00:16:28.698 Run Summary: Type Total Ran Passed Failed Inactive 00:16:28.698 suites 1 1 n/a 0 0 00:16:28.698 tests 23 23 23 0 0 00:16:28.698 asserts 152 152 152 0 n/a 00:16:28.698 00:16:28.698 Elapsed time = 1.225 seconds 00:16:28.698 19:08:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.698 19:08:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.698 19:08:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:28.698 19:08:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.698 19:08:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:28.698 19:08:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:28.698 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:28.698 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:28.698 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:28.698 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:28.698 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:28.698 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:28.698 rmmod nvme_tcp 00:16:28.698 rmmod nvme_fabrics 00:16:28.958 rmmod nvme_keyring 00:16:28.958 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:28.958 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:28.958 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:28.958 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 295621 ']' 00:16:28.958 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 295621 00:16:28.958 19:08:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
295621 ']' 00:16:28.958 19:08:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 295621 00:16:28.958 19:08:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:16:28.958 19:08:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:28.958 19:08:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 295621 00:16:28.958 19:08:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:28.958 19:08:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:28.958 19:08:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 295621' 00:16:28.958 killing process with pid 295621 00:16:28.958 19:08:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 295621 00:16:28.958 19:08:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 295621 00:16:29.218 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:29.218 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:29.218 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:29.218 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.218 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:29.218 19:08:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.218 19:08:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.218 19:08:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.125 19:08:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:31.125 00:16:31.125 real 0m10.461s 00:16:31.125 user 0m13.359s 00:16:31.125 sys 0m4.856s 00:16:31.125 19:08:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:31.125 19:08:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:31.125 ************************************ 00:16:31.125 END TEST nvmf_bdevio 00:16:31.125 ************************************ 00:16:31.125 19:08:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:31.125 19:08:33 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:31.125 19:08:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:31.125 19:08:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:31.125 19:08:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:31.389 ************************************ 00:16:31.389 START TEST nvmf_auth_target 00:16:31.389 ************************************ 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:31.389 * Looking for test storage... 
00:16:31.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:31.389 19:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:37.970 19:08:39 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:37.970 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:37.970 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:16:37.970 Found net devices under 0000:86:00.0: cvl_0_0 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:37.970 Found net devices under 0000:86:00.1: cvl_0_1 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:37.970 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:37.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:37.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:16:37.971 00:16:37.971 --- 10.0.0.2 ping statistics --- 00:16:37.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.971 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:37.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:37.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:16:37.971 00:16:37.971 --- 10.0.0.1 ping statistics --- 00:16:37.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.971 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=299549 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 299549 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 299549 ']' 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
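Before the auth target comes up, nvmf_tcp_init splits the two detected E810 ports (both 8086:0x159b, named cvl_0_0 and cvl_0_1) so that target and initiator traffic actually crosses the link: cvl_0_0 is moved into a fresh network namespace and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as the 10.0.0.1/24 initiator side, and a one-packet ping in each direction (0.243 ms and 0.056 ms above) proves reachability. Condensed from the trace, keeping this run's interface names and addresses:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root ns -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> initiator

This is also why nvmf_tgt is launched above under "ip netns exec cvl_0_0_ns_spdk": the target must listen from inside the namespace that owns 10.0.0.2.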
00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.971 19:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=299649 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f16b642253355766eba255ad6fb6b3c0c20045d39a1c9a03 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DW4 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f16b642253355766eba255ad6fb6b3c0c20045d39a1c9a03 0 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f16b642253355766eba255ad6fb6b3c0c20045d39a1c9a03 0 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f16b642253355766eba255ad6fb6b3c0c20045d39a1c9a03 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DW4 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DW4 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.DW4 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0e6f82806358ca573505a515ee5bf0a6f5c4310289c1fc7fa5f022b5dc3f43f7 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.x61 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0e6f82806358ca573505a515ee5bf0a6f5c4310289c1fc7fa5f022b5dc3f43f7 3 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0e6f82806358ca573505a515ee5bf0a6f5c4310289c1fc7fa5f022b5dc3f43f7 3 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0e6f82806358ca573505a515ee5bf0a6f5c4310289c1fc7fa5f022b5dc3f43f7 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:37.971 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.x61 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.x61 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.x61 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a18e81a00606d60f8e84ed71d869350f 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.r5n 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a18e81a00606d60f8e84ed71d869350f 1 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a18e81a00606d60f8e84ed71d869350f 1 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=a18e81a00606d60f8e84ed71d869350f 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.r5n 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.r5n 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.r5n 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=adfa96f41ea9fb8dc032b6535832c15e3775a97072aad493 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.aAb 00:16:38.231 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key adfa96f41ea9fb8dc032b6535832c15e3775a97072aad493 2 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 adfa96f41ea9fb8dc032b6535832c15e3775a97072aad493 2 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=adfa96f41ea9fb8dc032b6535832c15e3775a97072aad493 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.aAb 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.aAb 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.aAb 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a2d3c58061e8c8f80748df944b55139bd82e20424ed2615a 00:16:38.232 
19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.53K 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a2d3c58061e8c8f80748df944b55139bd82e20424ed2615a 2 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a2d3c58061e8c8f80748df944b55139bd82e20424ed2615a 2 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a2d3c58061e8c8f80748df944b55139bd82e20424ed2615a 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.53K 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.53K 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.53K 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=832bb9b7caa5e992ab465429e67b9361 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.759 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 832bb9b7caa5e992ab465429e67b9361 1 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 832bb9b7caa5e992ab465429e67b9361 1 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=832bb9b7caa5e992ab465429e67b9361 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.759 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.759 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.759 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=62b9758c6cdde0a00f92b482fb1de7293763a0d4892cc77380e5e99a925656c8 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.l4w 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 62b9758c6cdde0a00f92b482fb1de7293763a0d4892cc77380e5e99a925656c8 3 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 62b9758c6cdde0a00f92b482fb1de7293763a0d4892cc77380e5e99a925656c8 3 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=62b9758c6cdde0a00f92b482fb1de7293763a0d4892cc77380e5e99a925656c8 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:38.232 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:38.491 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.l4w 00:16:38.491 19:08:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.l4w 00:16:38.491 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.l4w 00:16:38.491 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:38.491 19:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 299549 00:16:38.491 19:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 299549 ']' 00:16:38.491 19:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.491 19:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.491 19:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
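All four key/ctrlr-key pairs for auth.sh are minted by gen_dhchap_key, whose moving parts are visible in the trace: <len> hex characters are read from /dev/urandom with xxd -p -c0 -l <len/2>, a temp file is created with mktemp -t spdk.key-<digest>.XXX, the secret is wrapped into the DHHC-1 format by format_dhchap_key (an inline python helper whose exact encoding is not shown in this trace), and the file is locked down to 0600. A condensed sketch, leaving format_dhchap_key as the suite's own encoder and inferring only the redirect into the temp file:

  # digest index map from the trace: null=0 sha256=1 sha384=2 sha512=3
  gen_key() {    # usage: gen_key sha256 32
      local digest=$1 len=$2 key file
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # len hex chars of secret
      file=$(mktemp -t "spdk.key-$digest.XXX")
      format_dhchap_key "$key" "${digests[$digest]}" > "$file"
      chmod 0600 "$file"                                 # DH-HMAC-CHAP secrets stay private
      echo "$file"
  }

The encoding can be spot-checked against the nvme connect line further down: its --dhchap-secret payload DHHC-1:00:ZjE2YjY0...: is the base64 of keys[0] (f16b6422...) plus a short trailing checksum.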
00:16:38.491 19:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.491 19:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.491 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.491 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:38.491 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 299649 /var/tmp/host.sock 00:16:38.491 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 299649 ']' 00:16:38.491 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:38.491 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.491 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:38.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:38.491 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.491 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.751 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.751 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:38.751 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:38.751 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.751 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.751 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.751 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:38.751 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DW4 00:16:38.751 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.751 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.751 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.751 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DW4 00:16:38.751 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DW4 00:16:39.010 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.x61 ]] 00:16:39.010 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x61 00:16:39.010 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.010 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.010 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.010 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x61 00:16:39.010 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x61 00:16:39.270 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:39.270 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.r5n 00:16:39.270 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.270 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.270 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.270 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.r5n 00:16:39.270 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.r5n 00:16:39.270 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.aAb ]] 00:16:39.270 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aAb 00:16:39.270 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.270 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.270 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.270 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aAb 00:16:39.270 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aAb 00:16:39.530 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:39.530 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.53K 00:16:39.530 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.530 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.530 19:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.530 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.53K 00:16:39.530 19:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.53K 00:16:39.789 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.759 ]] 00:16:39.790 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.759 00:16:39.790 19:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.790 19:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.790 19:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.790 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.759 00:16:39.790 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.759 00:16:40.049 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:40.049 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.l4w 00:16:40.049 19:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.049 19:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.049 19:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.049 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.l4w 00:16:40.049 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.l4w 00:16:40.049 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:40.049 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:40.049 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.049 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.049 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.049 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.309 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:40.309 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.309 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.309 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:40.309 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:40.309 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.309 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.309 19:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.309 19:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.309 19:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.309 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.309 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.568 00:16:40.568 19:08:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.568 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.568 19:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.827 19:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.827 19:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.827 19:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.827 19:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.827 19:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.827 19:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.827 { 00:16:40.827 "cntlid": 1, 00:16:40.827 "qid": 0, 00:16:40.827 "state": "enabled", 00:16:40.827 "thread": "nvmf_tgt_poll_group_000", 00:16:40.827 "listen_address": { 00:16:40.827 "trtype": "TCP", 00:16:40.827 "adrfam": "IPv4", 00:16:40.827 "traddr": "10.0.0.2", 00:16:40.827 "trsvcid": "4420" 00:16:40.827 }, 00:16:40.827 "peer_address": { 00:16:40.827 "trtype": "TCP", 00:16:40.827 "adrfam": "IPv4", 00:16:40.828 "traddr": "10.0.0.1", 00:16:40.828 "trsvcid": "35908" 00:16:40.828 }, 00:16:40.828 "auth": { 00:16:40.828 "state": "completed", 00:16:40.828 "digest": "sha256", 00:16:40.828 "dhgroup": "null" 00:16:40.828 } 00:16:40.828 } 00:16:40.828 ]' 00:16:40.828 19:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.828 19:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.828 19:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.828 19:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:40.828 19:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.828 19:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.828 19:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.828 19:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.087 19:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:16:44.378 19:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.378 19:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.378 19:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.378 19:08:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.378 19:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.378 19:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.378 19:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:44.378 19:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:44.638 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:44.638 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.638 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.638 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:44.638 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:44.638 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.638 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.638 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.638 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.638 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.638 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.638 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.898 00:16:44.898 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.898 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.898 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.898 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.898 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.898 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.898 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.898 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.898 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.898 { 00:16:44.898 "cntlid": 3, 00:16:44.898 "qid": 0, 00:16:44.898 
"state": "enabled", 00:16:44.898 "thread": "nvmf_tgt_poll_group_000", 00:16:44.898 "listen_address": { 00:16:44.898 "trtype": "TCP", 00:16:44.898 "adrfam": "IPv4", 00:16:44.898 "traddr": "10.0.0.2", 00:16:44.898 "trsvcid": "4420" 00:16:44.898 }, 00:16:44.898 "peer_address": { 00:16:44.898 "trtype": "TCP", 00:16:44.898 "adrfam": "IPv4", 00:16:44.898 "traddr": "10.0.0.1", 00:16:44.898 "trsvcid": "41618" 00:16:44.898 }, 00:16:44.898 "auth": { 00:16:44.898 "state": "completed", 00:16:44.898 "digest": "sha256", 00:16:44.898 "dhgroup": "null" 00:16:44.898 } 00:16:44.898 } 00:16:44.898 ]' 00:16:44.898 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.158 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.158 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.158 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:45.158 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.158 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.158 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.158 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.417 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:45.986 19:08:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.986 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.246 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.246 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.246 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.246 00:16:46.246 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.246 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.246 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.505 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.505 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.505 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.505 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.505 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.505 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.505 { 00:16:46.505 "cntlid": 5, 00:16:46.505 "qid": 0, 00:16:46.505 "state": "enabled", 00:16:46.505 "thread": "nvmf_tgt_poll_group_000", 00:16:46.505 "listen_address": { 00:16:46.505 "trtype": "TCP", 00:16:46.505 "adrfam": "IPv4", 00:16:46.505 "traddr": "10.0.0.2", 00:16:46.505 "trsvcid": "4420" 00:16:46.505 }, 00:16:46.505 "peer_address": { 00:16:46.505 "trtype": "TCP", 00:16:46.505 "adrfam": "IPv4", 00:16:46.505 "traddr": "10.0.0.1", 00:16:46.505 "trsvcid": "41642" 00:16:46.505 }, 00:16:46.505 "auth": { 00:16:46.505 "state": "completed", 00:16:46.505 "digest": "sha256", 00:16:46.505 "dhgroup": "null" 00:16:46.505 } 00:16:46.505 } 00:16:46.505 ]' 00:16:46.505 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.505 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.505 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.764 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:46.764 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:16:46.764 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.764 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.764 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.764 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:16:47.333 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.592 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.592 19:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.592 19:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.592 19:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.592 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.592 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.592 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.592 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:47.592 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.592 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.592 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:47.592 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:47.592 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.592 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:47.592 19:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.592 19:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.592 19:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.592 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.592 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.851 00:16:47.851 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.851 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.851 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.110 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.110 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.110 19:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.110 19:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.110 19:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.110 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.110 { 00:16:48.110 "cntlid": 7, 00:16:48.110 "qid": 0, 00:16:48.110 "state": "enabled", 00:16:48.110 "thread": "nvmf_tgt_poll_group_000", 00:16:48.110 "listen_address": { 00:16:48.110 "trtype": "TCP", 00:16:48.110 "adrfam": "IPv4", 00:16:48.110 "traddr": "10.0.0.2", 00:16:48.110 "trsvcid": "4420" 00:16:48.110 }, 00:16:48.110 "peer_address": { 00:16:48.110 "trtype": "TCP", 00:16:48.110 "adrfam": "IPv4", 00:16:48.110 "traddr": "10.0.0.1", 00:16:48.110 "trsvcid": "41672" 00:16:48.110 }, 00:16:48.110 "auth": { 00:16:48.110 "state": "completed", 00:16:48.110 "digest": "sha256", 00:16:48.110 "dhgroup": "null" 00:16:48.110 } 00:16:48.110 } 00:16:48.110 ]' 00:16:48.110 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.110 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.110 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.110 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:48.110 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.369 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.369 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.369 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.369 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:16:48.938 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.938 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.938 19:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.938 19:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.938 19:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.938 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.938 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.938 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.938 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:49.197 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:49.197 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.197 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.197 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:49.197 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:49.197 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.197 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.197 19:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.197 19:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.197 19:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.197 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.197 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.456 00:16:49.456 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.456 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.456 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.715 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.715 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.715 19:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:16:49.715 19:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.715 19:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.715 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.715 { 00:16:49.715 "cntlid": 9, 00:16:49.715 "qid": 0, 00:16:49.715 "state": "enabled", 00:16:49.715 "thread": "nvmf_tgt_poll_group_000", 00:16:49.715 "listen_address": { 00:16:49.715 "trtype": "TCP", 00:16:49.715 "adrfam": "IPv4", 00:16:49.715 "traddr": "10.0.0.2", 00:16:49.715 "trsvcid": "4420" 00:16:49.715 }, 00:16:49.715 "peer_address": { 00:16:49.715 "trtype": "TCP", 00:16:49.715 "adrfam": "IPv4", 00:16:49.715 "traddr": "10.0.0.1", 00:16:49.715 "trsvcid": "41706" 00:16:49.716 }, 00:16:49.716 "auth": { 00:16:49.716 "state": "completed", 00:16:49.716 "digest": "sha256", 00:16:49.716 "dhgroup": "ffdhe2048" 00:16:49.716 } 00:16:49.716 } 00:16:49.716 ]' 00:16:49.716 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.716 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.716 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.716 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:49.716 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.716 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.716 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.716 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.975 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:16:50.541 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.541 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.541 19:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.541 19:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.541 19:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.541 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.541 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.541 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:50.799 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:50.799 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.799 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.799 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:50.799 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:50.799 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.799 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.799 19:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.799 19:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.799 19:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.799 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.799 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.058 00:16:51.058 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.058 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.058 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.317 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.317 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.317 19:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.317 19:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.317 19:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.317 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.317 { 00:16:51.317 "cntlid": 11, 00:16:51.317 "qid": 0, 00:16:51.317 "state": "enabled", 00:16:51.317 "thread": "nvmf_tgt_poll_group_000", 00:16:51.317 "listen_address": { 00:16:51.317 "trtype": "TCP", 00:16:51.317 "adrfam": "IPv4", 00:16:51.317 "traddr": "10.0.0.2", 00:16:51.317 "trsvcid": "4420" 00:16:51.317 }, 00:16:51.317 "peer_address": { 00:16:51.317 "trtype": "TCP", 00:16:51.317 "adrfam": "IPv4", 00:16:51.317 "traddr": "10.0.0.1", 00:16:51.317 "trsvcid": "41744" 00:16:51.317 }, 00:16:51.317 "auth": { 00:16:51.317 "state": "completed", 00:16:51.317 "digest": "sha256", 00:16:51.317 "dhgroup": "ffdhe2048" 00:16:51.317 } 00:16:51.317 } 00:16:51.317 ]' 00:16:51.317 
19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.317 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.317 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.317 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:51.317 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.317 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.317 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.317 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.576 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:16:52.144 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.144 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.144 19:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.144 19:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.144 19:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.144 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.144 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:52.144 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:52.404 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:52.404 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.404 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.404 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:52.404 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:52.404 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.404 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.404 19:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.404 19:08:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.404 19:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.404 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.404 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.663 00:16:52.663 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.663 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.663 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.663 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.663 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.663 19:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.663 19:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.663 19:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.663 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.663 { 00:16:52.663 "cntlid": 13, 00:16:52.663 "qid": 0, 00:16:52.663 "state": "enabled", 00:16:52.663 "thread": "nvmf_tgt_poll_group_000", 00:16:52.663 "listen_address": { 00:16:52.663 "trtype": "TCP", 00:16:52.663 "adrfam": "IPv4", 00:16:52.663 "traddr": "10.0.0.2", 00:16:52.663 "trsvcid": "4420" 00:16:52.663 }, 00:16:52.663 "peer_address": { 00:16:52.663 "trtype": "TCP", 00:16:52.663 "adrfam": "IPv4", 00:16:52.663 "traddr": "10.0.0.1", 00:16:52.663 "trsvcid": "41770" 00:16:52.663 }, 00:16:52.663 "auth": { 00:16:52.663 "state": "completed", 00:16:52.664 "digest": "sha256", 00:16:52.664 "dhgroup": "ffdhe2048" 00:16:52.664 } 00:16:52.664 } 00:16:52.664 ]' 00:16:52.664 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.923 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.923 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.923 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:52.923 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.923 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.923 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.923 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.183 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.753 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.013 00:16:54.013 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.013 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:54.013 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.274 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.274 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.274 19:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.274 19:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.274 19:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.274 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.274 { 00:16:54.274 "cntlid": 15, 00:16:54.274 "qid": 0, 00:16:54.274 "state": "enabled", 00:16:54.274 "thread": "nvmf_tgt_poll_group_000", 00:16:54.274 "listen_address": { 00:16:54.274 "trtype": "TCP", 00:16:54.274 "adrfam": "IPv4", 00:16:54.274 "traddr": "10.0.0.2", 00:16:54.274 "trsvcid": "4420" 00:16:54.274 }, 00:16:54.274 "peer_address": { 00:16:54.274 "trtype": "TCP", 00:16:54.274 "adrfam": "IPv4", 00:16:54.274 "traddr": "10.0.0.1", 00:16:54.274 "trsvcid": "41788" 00:16:54.274 }, 00:16:54.274 "auth": { 00:16:54.274 "state": "completed", 00:16:54.274 "digest": "sha256", 00:16:54.274 "dhgroup": "ffdhe2048" 00:16:54.274 } 00:16:54.274 } 00:16:54.274 ]' 00:16:54.274 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.274 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.274 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.274 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:54.274 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.274 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.274 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.274 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.534 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:16:55.104 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.104 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.104 19:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.104 19:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.104 19:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.104 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.104 19:08:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.104 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:55.104 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:55.364 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:55.364 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.364 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.364 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:55.364 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:55.364 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.364 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.364 19:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.364 19:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.364 19:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.364 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.364 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.624 00:16:55.624 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.624 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.624 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.885 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.885 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.885 19:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.885 19:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.885 19:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.885 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.885 { 00:16:55.885 "cntlid": 17, 00:16:55.885 "qid": 0, 00:16:55.885 "state": "enabled", 00:16:55.885 "thread": "nvmf_tgt_poll_group_000", 00:16:55.885 "listen_address": { 00:16:55.885 "trtype": "TCP", 00:16:55.885 "adrfam": "IPv4", 
00:16:55.885 "traddr": "10.0.0.2", 00:16:55.885 "trsvcid": "4420" 00:16:55.885 }, 00:16:55.885 "peer_address": { 00:16:55.885 "trtype": "TCP", 00:16:55.885 "adrfam": "IPv4", 00:16:55.885 "traddr": "10.0.0.1", 00:16:55.885 "trsvcid": "48640" 00:16:55.885 }, 00:16:55.885 "auth": { 00:16:55.885 "state": "completed", 00:16:55.885 "digest": "sha256", 00:16:55.885 "dhgroup": "ffdhe3072" 00:16:55.885 } 00:16:55.885 } 00:16:55.885 ]' 00:16:55.885 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.885 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.885 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.885 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:55.885 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.885 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.885 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.885 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.147 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:16:56.717 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.717 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.717 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.717 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.717 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.718 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.718 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:56.718 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:56.718 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:56.718 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.718 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:56.718 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:56.718 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:56.978 19:08:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.978 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.978 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.978 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.978 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.978 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.978 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.978 00:16:56.978 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.978 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.978 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.238 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.238 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.238 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.238 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.238 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.238 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.238 { 00:16:57.238 "cntlid": 19, 00:16:57.238 "qid": 0, 00:16:57.238 "state": "enabled", 00:16:57.238 "thread": "nvmf_tgt_poll_group_000", 00:16:57.238 "listen_address": { 00:16:57.238 "trtype": "TCP", 00:16:57.238 "adrfam": "IPv4", 00:16:57.238 "traddr": "10.0.0.2", 00:16:57.238 "trsvcid": "4420" 00:16:57.238 }, 00:16:57.238 "peer_address": { 00:16:57.238 "trtype": "TCP", 00:16:57.238 "adrfam": "IPv4", 00:16:57.238 "traddr": "10.0.0.1", 00:16:57.238 "trsvcid": "48670" 00:16:57.238 }, 00:16:57.238 "auth": { 00:16:57.238 "state": "completed", 00:16:57.238 "digest": "sha256", 00:16:57.238 "dhgroup": "ffdhe3072" 00:16:57.238 } 00:16:57.238 } 00:16:57.238 ]' 00:16:57.238 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.238 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.238 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.499 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:57.499 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.499 19:08:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.499 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.499 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.499 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:16:58.069 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.069 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.069 19:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.069 19:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.069 19:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.069 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.069 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:58.069 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:58.330 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:58.330 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.330 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:58.330 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:58.330 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:58.330 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.330 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.330 19:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.330 19:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.330 19:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.330 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.330 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.591 00:16:58.591 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.591 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.591 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.851 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.851 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.851 19:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.851 19:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.851 19:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.851 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.851 { 00:16:58.851 "cntlid": 21, 00:16:58.851 "qid": 0, 00:16:58.851 "state": "enabled", 00:16:58.851 "thread": "nvmf_tgt_poll_group_000", 00:16:58.851 "listen_address": { 00:16:58.851 "trtype": "TCP", 00:16:58.851 "adrfam": "IPv4", 00:16:58.851 "traddr": "10.0.0.2", 00:16:58.851 "trsvcid": "4420" 00:16:58.851 }, 00:16:58.851 "peer_address": { 00:16:58.851 "trtype": "TCP", 00:16:58.851 "adrfam": "IPv4", 00:16:58.851 "traddr": "10.0.0.1", 00:16:58.851 "trsvcid": "48692" 00:16:58.851 }, 00:16:58.851 "auth": { 00:16:58.851 "state": "completed", 00:16:58.851 "digest": "sha256", 00:16:58.851 "dhgroup": "ffdhe3072" 00:16:58.851 } 00:16:58.851 } 00:16:58.851 ]' 00:16:58.851 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.851 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.851 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.851 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:58.851 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.851 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.851 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.852 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.111 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:16:59.679 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
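
The trace above has just completed one pass of the connect_authenticate loop for key1 with sha256/ffdhe3072: the host-side SPDK app (reached through /var/tmp/host.sock) is pinned to a single digest/dhgroup pair, the host NQN is added to the subsystem with a host key and a bidirectional controller key, and a controller attach forces the DH-HMAC-CHAP handshake. Below is a condensed sketch of that pass, reconstructed from the commands in the log; the rpc.py path, sockets, and NQNs are copied verbatim from the trace, while the registration of the key1/ckey1 key material happens earlier in auth.sh and is assumed here.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side (initiator SPDK app on /var/tmp/host.sock): allow exactly one
  # digest/dhgroup combination, so the negotiated result is predictable.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  # Target side (default RPC socket): allow the host on the subsystem,
  # pinning its DH-HMAC-CHAP key and the controller (bidirectional) key.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Attach a controller from the host app; this runs the handshake and
  # fails if the two sides' keys disagree.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Tear down so the next key/dhgroup combination starts clean.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The nvme connect / nvme disconnect leg that follows in the log exercises the same handshake through the Linux kernel initiator instead of SPDK's, passing the raw secrets on the command line via --dhchap-secret and --dhchap-ctrl-secret.
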
00:16:59.679 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.679 19:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.679 19:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.679 19:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.679 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.679 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:59.680 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:59.937 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:59.937 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.937 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:59.937 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:59.937 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:59.937 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.937 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:59.937 19:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.937 19:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.937 19:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.937 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.937 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.195 00:17:00.195 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.195 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.195 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.454 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.454 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.454 19:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.454 19:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:00.454 19:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.454 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.454 { 00:17:00.454 "cntlid": 23, 00:17:00.454 "qid": 0, 00:17:00.454 "state": "enabled", 00:17:00.454 "thread": "nvmf_tgt_poll_group_000", 00:17:00.454 "listen_address": { 00:17:00.454 "trtype": "TCP", 00:17:00.454 "adrfam": "IPv4", 00:17:00.454 "traddr": "10.0.0.2", 00:17:00.454 "trsvcid": "4420" 00:17:00.454 }, 00:17:00.454 "peer_address": { 00:17:00.454 "trtype": "TCP", 00:17:00.454 "adrfam": "IPv4", 00:17:00.454 "traddr": "10.0.0.1", 00:17:00.454 "trsvcid": "48706" 00:17:00.454 }, 00:17:00.454 "auth": { 00:17:00.454 "state": "completed", 00:17:00.454 "digest": "sha256", 00:17:00.454 "dhgroup": "ffdhe3072" 00:17:00.454 } 00:17:00.455 } 00:17:00.455 ]' 00:17:00.455 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.455 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.455 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.455 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:00.455 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.455 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.455 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.455 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.714 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:17:01.283 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.283 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.283 19:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.283 19:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.283 19:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.283 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.283 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.283 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.283 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.543 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:17:01.543 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.543 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.543 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:01.543 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:01.543 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.543 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.543 19:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.543 19:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.543 19:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.543 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.543 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.804 00:17:01.804 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.804 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.804 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.804 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.804 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.804 19:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.804 19:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.064 19:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.064 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.064 { 00:17:02.064 "cntlid": 25, 00:17:02.064 "qid": 0, 00:17:02.064 "state": "enabled", 00:17:02.064 "thread": "nvmf_tgt_poll_group_000", 00:17:02.064 "listen_address": { 00:17:02.064 "trtype": "TCP", 00:17:02.064 "adrfam": "IPv4", 00:17:02.064 "traddr": "10.0.0.2", 00:17:02.064 "trsvcid": "4420" 00:17:02.064 }, 00:17:02.064 "peer_address": { 00:17:02.064 "trtype": "TCP", 00:17:02.064 "adrfam": "IPv4", 00:17:02.064 "traddr": "10.0.0.1", 00:17:02.064 "trsvcid": "48732" 00:17:02.064 }, 00:17:02.064 "auth": { 00:17:02.064 "state": "completed", 00:17:02.064 "digest": "sha256", 00:17:02.064 "dhgroup": "ffdhe4096" 00:17:02.064 } 00:17:02.064 } 00:17:02.064 ]' 00:17:02.064 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.064 19:09:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.064 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.064 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:02.064 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.064 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.064 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.064 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.325 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.894 19:09:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.894 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.154 00:17:03.154 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.154 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.154 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.413 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.413 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.414 19:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.414 19:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.414 19:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.414 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.414 { 00:17:03.414 "cntlid": 27, 00:17:03.414 "qid": 0, 00:17:03.414 "state": "enabled", 00:17:03.414 "thread": "nvmf_tgt_poll_group_000", 00:17:03.414 "listen_address": { 00:17:03.414 "trtype": "TCP", 00:17:03.414 "adrfam": "IPv4", 00:17:03.414 "traddr": "10.0.0.2", 00:17:03.414 "trsvcid": "4420" 00:17:03.414 }, 00:17:03.414 "peer_address": { 00:17:03.414 "trtype": "TCP", 00:17:03.414 "adrfam": "IPv4", 00:17:03.414 "traddr": "10.0.0.1", 00:17:03.414 "trsvcid": "48768" 00:17:03.414 }, 00:17:03.414 "auth": { 00:17:03.414 "state": "completed", 00:17:03.414 "digest": "sha256", 00:17:03.414 "dhgroup": "ffdhe4096" 00:17:03.414 } 00:17:03.414 } 00:17:03.414 ]' 00:17:03.414 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.414 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.414 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.673 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:03.673 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.673 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.673 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.673 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.673 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:17:04.242 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.502 19:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.502 19:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.502 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.502 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.762 00:17:04.762 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.762 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.762 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.021 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.021 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.021 19:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.021 19:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.021 19:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.021 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.021 { 00:17:05.021 "cntlid": 29, 00:17:05.021 "qid": 0, 00:17:05.021 "state": "enabled", 00:17:05.021 "thread": "nvmf_tgt_poll_group_000", 00:17:05.021 "listen_address": { 00:17:05.021 "trtype": "TCP", 00:17:05.021 "adrfam": "IPv4", 00:17:05.021 "traddr": "10.0.0.2", 00:17:05.021 "trsvcid": "4420" 00:17:05.021 }, 00:17:05.021 "peer_address": { 00:17:05.021 "trtype": "TCP", 00:17:05.021 "adrfam": "IPv4", 00:17:05.021 "traddr": "10.0.0.1", 00:17:05.021 "trsvcid": "45210" 00:17:05.021 }, 00:17:05.021 "auth": { 00:17:05.021 "state": "completed", 00:17:05.021 "digest": "sha256", 00:17:05.021 "dhgroup": "ffdhe4096" 00:17:05.021 } 00:17:05.021 } 00:17:05.021 ]' 00:17:05.021 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.021 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.021 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.021 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:05.021 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.281 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.281 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.281 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.281 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:17:05.851 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.851 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.851 19:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.851 19:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.851 19:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
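
After each attach, the test asserts what was actually negotiated by pulling the subsystem's qpair list from the target and filtering it with jq (target/auth.sh@44-48 in the trace). The odd-looking right-hand sides in lines like [[ sha256 == \s\h\a\2\5\6 ]] are just bash xtrace rendering: the right side of a [[ == ]] comparison is a glob pattern, so set -x escapes every character when printing it. The assertion pattern, restated as a minimal standalone sketch (rpc.py path and NQN copied from the log; this mirrors, rather than reproduces, the script's rpc_cmd/jq plumbing):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Fetch the qpairs for the subsystem; one enabled qpair is expected.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  # Verify the negotiated digest, DH group, and final auth state.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

Because bdev_nvme_set_options restricted the host to exactly one digest and one dhgroup, any other value here means negotiation picked something unexpected and the check fails.
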
00:17:05.851 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.851 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:05.851 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:06.111 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:06.111 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.111 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:06.111 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:06.111 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:06.111 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.111 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:06.111 19:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.111 19:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.111 19:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.111 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.111 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.371 00:17:06.371 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.371 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.371 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.631 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.631 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.631 19:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.631 19:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.631 19:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.631 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.631 { 00:17:06.631 "cntlid": 31, 00:17:06.631 "qid": 0, 00:17:06.631 "state": "enabled", 00:17:06.631 "thread": "nvmf_tgt_poll_group_000", 00:17:06.631 "listen_address": { 00:17:06.631 "trtype": "TCP", 00:17:06.631 "adrfam": "IPv4", 00:17:06.631 "traddr": "10.0.0.2", 00:17:06.631 "trsvcid": 
"4420" 00:17:06.631 }, 00:17:06.631 "peer_address": { 00:17:06.631 "trtype": "TCP", 00:17:06.631 "adrfam": "IPv4", 00:17:06.631 "traddr": "10.0.0.1", 00:17:06.631 "trsvcid": "45238" 00:17:06.631 }, 00:17:06.631 "auth": { 00:17:06.631 "state": "completed", 00:17:06.631 "digest": "sha256", 00:17:06.631 "dhgroup": "ffdhe4096" 00:17:06.631 } 00:17:06.631 } 00:17:06.631 ]' 00:17:06.631 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.631 19:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.631 19:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.631 19:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:06.631 19:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.631 19:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.631 19:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.631 19:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.890 19:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:17:07.460 19:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.460 19:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.460 19:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.460 19:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.460 19:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.460 19:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.460 19:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.460 19:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:07.460 19:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:07.720 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:07.720 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.720 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:07.720 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:07.720 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:07.720 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.720 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.720 19:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.720 19:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.720 19:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.720 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.720 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.979 00:17:07.979 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.979 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.979 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.238 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.238 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.238 19:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.238 19:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.238 19:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.238 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.238 { 00:17:08.238 "cntlid": 33, 00:17:08.238 "qid": 0, 00:17:08.238 "state": "enabled", 00:17:08.238 "thread": "nvmf_tgt_poll_group_000", 00:17:08.238 "listen_address": { 00:17:08.238 "trtype": "TCP", 00:17:08.238 "adrfam": "IPv4", 00:17:08.238 "traddr": "10.0.0.2", 00:17:08.238 "trsvcid": "4420" 00:17:08.238 }, 00:17:08.238 "peer_address": { 00:17:08.238 "trtype": "TCP", 00:17:08.238 "adrfam": "IPv4", 00:17:08.238 "traddr": "10.0.0.1", 00:17:08.238 "trsvcid": "45266" 00:17:08.238 }, 00:17:08.238 "auth": { 00:17:08.238 "state": "completed", 00:17:08.238 "digest": "sha256", 00:17:08.238 "dhgroup": "ffdhe6144" 00:17:08.238 } 00:17:08.238 } 00:17:08.238 ]' 00:17:08.238 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.238 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.238 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.238 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:08.238 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.238 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:08.238 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.238 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.498 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:17:09.066 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.066 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.066 19:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.066 19:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.066 19:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.066 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.066 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:09.066 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:09.325 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:09.325 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.325 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:09.325 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:09.325 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:09.325 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.326 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.326 19:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.326 19:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.326 19:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.326 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.326 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.584 00:17:09.584 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.584 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.584 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.964 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.964 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.964 19:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.964 19:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.964 19:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.964 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.964 { 00:17:09.964 "cntlid": 35, 00:17:09.964 "qid": 0, 00:17:09.964 "state": "enabled", 00:17:09.964 "thread": "nvmf_tgt_poll_group_000", 00:17:09.964 "listen_address": { 00:17:09.964 "trtype": "TCP", 00:17:09.964 "adrfam": "IPv4", 00:17:09.964 "traddr": "10.0.0.2", 00:17:09.964 "trsvcid": "4420" 00:17:09.964 }, 00:17:09.964 "peer_address": { 00:17:09.964 "trtype": "TCP", 00:17:09.964 "adrfam": "IPv4", 00:17:09.964 "traddr": "10.0.0.1", 00:17:09.964 "trsvcid": "45288" 00:17:09.964 }, 00:17:09.964 "auth": { 00:17:09.964 "state": "completed", 00:17:09.964 "digest": "sha256", 00:17:09.964 "dhgroup": "ffdhe6144" 00:17:09.964 } 00:17:09.964 } 00:17:09.964 ]' 00:17:09.964 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.964 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.964 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.964 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.964 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.964 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.964 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.964 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.243 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
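
The secrets on the nvme connect lines are NVMe-spec DHHC-1 strings of the form DHHC-1:<t>:<base64>:, where <t> is the key-transformation indicator (00 = none, 01/02/03 = SHA-256/384/512, matching the --hmac argument of nvme-cli's gen-dhchap-key) and the base64 payload decodes to the raw key followed by a 4-byte CRC-32 appended by the key generator. A small format check on one of the secrets from this trace (illustrative only, not part of the test; the CRC is reported by length, not verified):

  # Validate the shape of a DHHC-1 secret and report its key length.
  secret='DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa:'
  [[ $secret =~ ^DHHC-1:(0[0-3]):([A-Za-z0-9+/]+=*):$ ]] || { echo malformed; exit 1; }
  keylen=$(( $(printf '%s' "${BASH_REMATCH[2]}" | base64 -d | wc -c) - 4 ))
  echo "transform=${BASH_REMATCH[1]} key=${keylen} bytes"   # prints: transform=01 key=32 bytes

Note that the indicator encodes the transformation applied to the key, not its length: the DHHC-1:00: secret used for key0 above decodes to 52 bytes, i.e. a 48-byte key plus the 4 trailing CRC bytes.
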
00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.867 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.135 00:17:11.135 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.135 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.135 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.413 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.413 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.413 19:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:17:11.413 19:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.413 19:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.413 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.413 { 00:17:11.413 "cntlid": 37, 00:17:11.413 "qid": 0, 00:17:11.413 "state": "enabled", 00:17:11.413 "thread": "nvmf_tgt_poll_group_000", 00:17:11.413 "listen_address": { 00:17:11.413 "trtype": "TCP", 00:17:11.413 "adrfam": "IPv4", 00:17:11.413 "traddr": "10.0.0.2", 00:17:11.413 "trsvcid": "4420" 00:17:11.413 }, 00:17:11.413 "peer_address": { 00:17:11.413 "trtype": "TCP", 00:17:11.413 "adrfam": "IPv4", 00:17:11.413 "traddr": "10.0.0.1", 00:17:11.413 "trsvcid": "45328" 00:17:11.413 }, 00:17:11.413 "auth": { 00:17:11.413 "state": "completed", 00:17:11.413 "digest": "sha256", 00:17:11.413 "dhgroup": "ffdhe6144" 00:17:11.413 } 00:17:11.413 } 00:17:11.413 ]' 00:17:11.413 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.413 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.413 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.413 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.413 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.686 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.686 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.686 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.686 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:17:12.274 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.274 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.274 19:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.274 19:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.274 19:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.274 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.274 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:12.274 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:12.549 19:09:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:12.549 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.549 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:12.549 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:12.549 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:12.549 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.549 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:12.549 19:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.549 19:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.549 19:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.549 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.549 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.849 00:17:12.849 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.849 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.849 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.125 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.125 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.125 19:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.125 19:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.125 19:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.125 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.125 { 00:17:13.125 "cntlid": 39, 00:17:13.125 "qid": 0, 00:17:13.125 "state": "enabled", 00:17:13.125 "thread": "nvmf_tgt_poll_group_000", 00:17:13.125 "listen_address": { 00:17:13.125 "trtype": "TCP", 00:17:13.125 "adrfam": "IPv4", 00:17:13.125 "traddr": "10.0.0.2", 00:17:13.125 "trsvcid": "4420" 00:17:13.125 }, 00:17:13.125 "peer_address": { 00:17:13.125 "trtype": "TCP", 00:17:13.125 "adrfam": "IPv4", 00:17:13.125 "traddr": "10.0.0.1", 00:17:13.125 "trsvcid": "45350" 00:17:13.125 }, 00:17:13.125 "auth": { 00:17:13.125 "state": "completed", 00:17:13.125 "digest": "sha256", 00:17:13.125 "dhgroup": "ffdhe6144" 00:17:13.125 } 00:17:13.125 } 00:17:13.125 ]' 00:17:13.125 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.125 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.125 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.125 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:13.125 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.125 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.125 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.125 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.397 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:17:14.026 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.026 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.026 19:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.026 19:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.026 19:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.026 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.026 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.026 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.026 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.286 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:14.286 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.286 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:14.286 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:14.286 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:14.286 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.286 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.286 19:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.286 19:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.286 19:09:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.286 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.286 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.546 00:17:14.546 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.546 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.546 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.806 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.806 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.806 19:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.806 19:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.806 19:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.806 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.806 { 00:17:14.806 "cntlid": 41, 00:17:14.806 "qid": 0, 00:17:14.806 "state": "enabled", 00:17:14.806 "thread": "nvmf_tgt_poll_group_000", 00:17:14.806 "listen_address": { 00:17:14.806 "trtype": "TCP", 00:17:14.806 "adrfam": "IPv4", 00:17:14.806 "traddr": "10.0.0.2", 00:17:14.806 "trsvcid": "4420" 00:17:14.806 }, 00:17:14.806 "peer_address": { 00:17:14.806 "trtype": "TCP", 00:17:14.806 "adrfam": "IPv4", 00:17:14.806 "traddr": "10.0.0.1", 00:17:14.806 "trsvcid": "36864" 00:17:14.806 }, 00:17:14.806 "auth": { 00:17:14.806 "state": "completed", 00:17:14.806 "digest": "sha256", 00:17:14.806 "dhgroup": "ffdhe8192" 00:17:14.806 } 00:17:14.806 } 00:17:14.806 ]' 00:17:14.806 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.806 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.806 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.065 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:15.065 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.065 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.065 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.065 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.065 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:17:15.634 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.634 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.634 19:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.634 19:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.634 19:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.634 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.634 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.634 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.894 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:15.894 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.894 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:15.894 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:15.894 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:15.894 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.894 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.894 19:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.894 19:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.894 19:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.894 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.894 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.464 00:17:16.464 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.464 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.464 19:09:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.724 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.724 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.724 19:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.724 19:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.724 19:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.724 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.724 { 00:17:16.724 "cntlid": 43, 00:17:16.724 "qid": 0, 00:17:16.724 "state": "enabled", 00:17:16.724 "thread": "nvmf_tgt_poll_group_000", 00:17:16.724 "listen_address": { 00:17:16.724 "trtype": "TCP", 00:17:16.724 "adrfam": "IPv4", 00:17:16.724 "traddr": "10.0.0.2", 00:17:16.724 "trsvcid": "4420" 00:17:16.724 }, 00:17:16.724 "peer_address": { 00:17:16.724 "trtype": "TCP", 00:17:16.724 "adrfam": "IPv4", 00:17:16.724 "traddr": "10.0.0.1", 00:17:16.724 "trsvcid": "36884" 00:17:16.724 }, 00:17:16.724 "auth": { 00:17:16.724 "state": "completed", 00:17:16.724 "digest": "sha256", 00:17:16.724 "dhgroup": "ffdhe8192" 00:17:16.724 } 00:17:16.724 } 00:17:16.724 ]' 00:17:16.724 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.724 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.724 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.724 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.724 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.724 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.724 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.724 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.982 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:17:17.550 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.550 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.550 19:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.550 19:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.550 19:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.550 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:17:17.550 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.550 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.810 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:17.810 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.810 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:17.810 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:17.810 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:17.810 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.810 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.810 19:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.810 19:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.810 19:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.810 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.810 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.069 00:17:18.069 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.069 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.069 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.328 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.328 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.328 19:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.328 19:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.328 19:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.328 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.328 { 00:17:18.328 "cntlid": 45, 00:17:18.328 "qid": 0, 00:17:18.328 "state": "enabled", 00:17:18.328 "thread": "nvmf_tgt_poll_group_000", 00:17:18.328 "listen_address": { 00:17:18.328 "trtype": "TCP", 00:17:18.328 "adrfam": "IPv4", 00:17:18.328 "traddr": "10.0.0.2", 00:17:18.328 
"trsvcid": "4420" 00:17:18.328 }, 00:17:18.328 "peer_address": { 00:17:18.328 "trtype": "TCP", 00:17:18.328 "adrfam": "IPv4", 00:17:18.328 "traddr": "10.0.0.1", 00:17:18.328 "trsvcid": "36902" 00:17:18.328 }, 00:17:18.328 "auth": { 00:17:18.328 "state": "completed", 00:17:18.328 "digest": "sha256", 00:17:18.328 "dhgroup": "ffdhe8192" 00:17:18.328 } 00:17:18.328 } 00:17:18.328 ]' 00:17:18.328 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.328 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.328 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.587 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.587 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.587 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.587 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.587 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.587 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:17:19.156 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.156 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.156 19:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.156 19:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.156 19:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.156 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.156 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:19.156 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:19.416 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:19.416 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.416 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:19.416 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:19.416 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:19.416 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:17:19.416 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:19.416 19:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.416 19:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.416 19:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.416 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.416 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.984 00:17:19.984 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.984 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.984 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.984 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.984 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.984 19:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.984 19:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.242 19:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.242 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.242 { 00:17:20.242 "cntlid": 47, 00:17:20.242 "qid": 0, 00:17:20.242 "state": "enabled", 00:17:20.242 "thread": "nvmf_tgt_poll_group_000", 00:17:20.242 "listen_address": { 00:17:20.242 "trtype": "TCP", 00:17:20.242 "adrfam": "IPv4", 00:17:20.242 "traddr": "10.0.0.2", 00:17:20.242 "trsvcid": "4420" 00:17:20.242 }, 00:17:20.242 "peer_address": { 00:17:20.242 "trtype": "TCP", 00:17:20.242 "adrfam": "IPv4", 00:17:20.242 "traddr": "10.0.0.1", 00:17:20.242 "trsvcid": "36928" 00:17:20.242 }, 00:17:20.242 "auth": { 00:17:20.242 "state": "completed", 00:17:20.242 "digest": "sha256", 00:17:20.242 "dhgroup": "ffdhe8192" 00:17:20.242 } 00:17:20.242 } 00:17:20.242 ]' 00:17:20.242 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.242 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.242 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.242 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:20.242 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.242 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.242 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:17:20.242 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.500 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:17:21.066 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.066 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.066 19:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.066 19:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.066 19:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.066 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:21.066 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.066 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.066 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:21.066 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:21.066 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:21.066 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.066 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:21.066 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:21.066 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:21.067 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.067 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.067 19:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.067 19:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.067 19:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.067 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.067 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.326 00:17:21.326 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.326 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.326 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.584 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.584 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.584 19:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.584 19:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.584 19:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.584 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.584 { 00:17:21.584 "cntlid": 49, 00:17:21.584 "qid": 0, 00:17:21.584 "state": "enabled", 00:17:21.584 "thread": "nvmf_tgt_poll_group_000", 00:17:21.584 "listen_address": { 00:17:21.584 "trtype": "TCP", 00:17:21.584 "adrfam": "IPv4", 00:17:21.584 "traddr": "10.0.0.2", 00:17:21.584 "trsvcid": "4420" 00:17:21.584 }, 00:17:21.584 "peer_address": { 00:17:21.584 "trtype": "TCP", 00:17:21.584 "adrfam": "IPv4", 00:17:21.584 "traddr": "10.0.0.1", 00:17:21.584 "trsvcid": "36954" 00:17:21.584 }, 00:17:21.584 "auth": { 00:17:21.584 "state": "completed", 00:17:21.584 "digest": "sha384", 00:17:21.584 "dhgroup": "null" 00:17:21.584 } 00:17:21.584 } 00:17:21.584 ]' 00:17:21.584 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.584 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.584 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.584 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:21.843 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.843 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.843 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.843 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.843 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:17:22.411 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.411 19:09:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.411 19:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.411 19:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.411 19:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.411 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.411 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:22.411 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:22.669 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:22.669 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.669 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:22.669 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:22.669 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:22.669 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.669 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.669 19:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.669 19:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.669 19:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.669 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.669 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.927 00:17:22.927 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.927 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.928 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.186 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.186 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.186 19:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.186 19:09:25 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x
00:17:23.186 19:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:23.186 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:23.186 {
00:17:23.186 "cntlid": 51,
00:17:23.186 "qid": 0,
00:17:23.186 "state": "enabled",
00:17:23.186 "thread": "nvmf_tgt_poll_group_000",
00:17:23.186 "listen_address": {
00:17:23.186 "trtype": "TCP",
00:17:23.186 "adrfam": "IPv4",
00:17:23.186 "traddr": "10.0.0.2",
00:17:23.186 "trsvcid": "4420"
00:17:23.186 },
00:17:23.186 "peer_address": {
00:17:23.186 "trtype": "TCP",
00:17:23.186 "adrfam": "IPv4",
00:17:23.186 "traddr": "10.0.0.1",
00:17:23.186 "trsvcid": "36984"
00:17:23.186 },
00:17:23.186 "auth": {
00:17:23.186 "state": "completed",
00:17:23.186 "digest": "sha384",
00:17:23.186 "dhgroup": "null"
00:17:23.186 }
00:17:23.186 }
00:17:23.186 ]'
00:17:23.186 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:23.186 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:23.186 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:23.186 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:23.186 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:23.186 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:23.186 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:23.186 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:23.444 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==:
00:17:24.011 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:24.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:24.011 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:24.011 19:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.011 19:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.011 19:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.011 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:24.011 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:24.011 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:24.269 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2
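The cntlid-51 qpair dump above is what each pass's three jq assertions run against. A minimal standalone version of those checks, with the JSON trimmed to the fields the filters actually read (the trimmed document is illustrative, not the full RPC output):

  # Trimmed qpairs document; only the .auth object matters to the checks.
  qpairs='[{"cntlid": 51, "auth": {"state": "completed", "digest": "sha384", "dhgroup": "null"}}]'
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]    # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]      # "null" = no DH exchange
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] # handshake finished

Note that the dhgroup is the JSON string "null" rather than a JSON null, so jq -r prints null either way and the plain bash comparison covers the no-DH case as well.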
00:17:24.269 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:24.269 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:24.269 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:24.269 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:24.269 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:24.269 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:24.269 19:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.269 19:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.269 19:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.269 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:24.269 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:24.527
00:17:24.527 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:24.527 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:24.527 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:24.527 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:24.527 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:24.527 19:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.527 19:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.527 19:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.527 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:24.527 {
00:17:24.527 "cntlid": 53,
00:17:24.527 "qid": 0,
00:17:24.527 "state": "enabled",
00:17:24.527 "thread": "nvmf_tgt_poll_group_000",
00:17:24.527 "listen_address": {
00:17:24.527 "trtype": "TCP",
00:17:24.527 "adrfam": "IPv4",
00:17:24.527 "traddr": "10.0.0.2",
00:17:24.527 "trsvcid": "4420"
00:17:24.527 },
00:17:24.527 "peer_address": {
00:17:24.527 "trtype": "TCP",
00:17:24.527 "adrfam": "IPv4",
00:17:24.527 "traddr": "10.0.0.1",
00:17:24.527 "trsvcid": "37500"
00:17:24.527 },
00:17:24.527 "auth": {
00:17:24.527 "state": "completed",
00:17:24.527 "digest": "sha384",
00:17:24.527 "dhgroup": "null"
00:17:24.527 }
00:17:24.527 }
00:17:24.527 ]'
00:17:24.527 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:24.784 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384
== \s\h\a\3\8\4 ]] 00:17:24.784 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.784 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:24.784 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.784 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.784 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.784 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.042 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:17:25.611 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.611 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.611 19:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.611 19:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.611 19:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.611 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.611 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.611 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.611 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:25.611 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.611 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.611 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:25.611 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:25.611 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.611 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:25.611 19:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.611 19:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.611 19:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.611 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:25.611 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:25.871
00:17:25.871 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:25.871 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:25.871 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:26.131 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:26.131 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:26.131 19:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:26.131 19:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.131 19:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:26.131 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:26.131 {
00:17:26.131 "cntlid": 55,
00:17:26.131 "qid": 0,
00:17:26.131 "state": "enabled",
00:17:26.131 "thread": "nvmf_tgt_poll_group_000",
00:17:26.131 "listen_address": {
00:17:26.131 "trtype": "TCP",
00:17:26.131 "adrfam": "IPv4",
00:17:26.131 "traddr": "10.0.0.2",
00:17:26.131 "trsvcid": "4420"
00:17:26.131 },
00:17:26.131 "peer_address": {
00:17:26.131 "trtype": "TCP",
00:17:26.131 "adrfam": "IPv4",
00:17:26.131 "traddr": "10.0.0.1",
00:17:26.131 "trsvcid": "37528"
00:17:26.131 },
00:17:26.131 "auth": {
00:17:26.131 "state": "completed",
00:17:26.131 "digest": "sha384",
00:17:26.131 "dhgroup": "null"
00:17:26.131 }
00:17:26.131 }
00:17:26.131 ]'
00:17:26.131 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:26.131 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:26.131 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:26.131 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:26.131 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:26.131 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:26.131 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:26.131 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:26.390 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=:
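The --dhchap-secret and --dhchap-ctrl-secret values threaded through these nvme connect calls use the NVMe DH-HMAC-CHAP key format, DHHC-1:<tt>:<base64>:. Per the spec's key-transformation encoding, <tt> records how the configured secret was transformed (00 = none, 01/02/03 = HMAC-SHA-256/-384/-512), which lines up with this run's key0 through key3 carrying 00 through 03. The base64 payload is the secret followed by a 4-byte CRC-32; a quick sketch to inspect one of the log's keys (the key string is copied verbatim from the trace):

  key='DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=:'
  b64=${key#DHHC-1:*:}   # strip the DHHC-1:<tt>: header (shortest match, so only the header)
  b64=${b64%:}           # drop the trailing colon
  echo -n "$b64" | base64 -d | wc -c   # decoded length = secret bytes + 4 CRC-32 bytes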
00:17:26.959 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:26.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:26.959 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:26.959 19:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:26.959 19:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.959 19:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:26.959 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:26.959 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:26.959 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:17:26.959 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:17:27.219 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0
00:17:27.219 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:27.219 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:27.219 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:17:27.219 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:27.219 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:27.219 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:27.219 19:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:27.219 19:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.219 19:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:27.219 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:27.219 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:27.479
00:17:27.479 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:27.479 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:27.479 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:27.739 19:09:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.739 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.739 19:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.739 19:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.739 19:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.739 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.739 { 00:17:27.739 "cntlid": 57, 00:17:27.739 "qid": 0, 00:17:27.739 "state": "enabled", 00:17:27.739 "thread": "nvmf_tgt_poll_group_000", 00:17:27.739 "listen_address": { 00:17:27.739 "trtype": "TCP", 00:17:27.739 "adrfam": "IPv4", 00:17:27.739 "traddr": "10.0.0.2", 00:17:27.739 "trsvcid": "4420" 00:17:27.739 }, 00:17:27.739 "peer_address": { 00:17:27.739 "trtype": "TCP", 00:17:27.739 "adrfam": "IPv4", 00:17:27.739 "traddr": "10.0.0.1", 00:17:27.739 "trsvcid": "37548" 00:17:27.739 }, 00:17:27.739 "auth": { 00:17:27.739 "state": "completed", 00:17:27.739 "digest": "sha384", 00:17:27.739 "dhgroup": "ffdhe2048" 00:17:27.739 } 00:17:27.739 } 00:17:27.739 ]' 00:17:27.739 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.739 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.739 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.739 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:27.739 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.739 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.739 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.739 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.999 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:17:28.568 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.568 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.568 19:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.568 19:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.568 19:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.568 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.568 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:28.568 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:28.828 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:28.828 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.828 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:28.828 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:28.828 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:28.828 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.828 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.828 19:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.828 19:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.828 19:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.828 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.828 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.828 00:17:29.088 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.088 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.088 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.088 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.088 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.088 19:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.088 19:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.088 19:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.088 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.088 { 00:17:29.088 "cntlid": 59, 00:17:29.088 "qid": 0, 00:17:29.088 "state": "enabled", 00:17:29.088 "thread": "nvmf_tgt_poll_group_000", 00:17:29.088 "listen_address": { 00:17:29.088 "trtype": "TCP", 00:17:29.088 "adrfam": "IPv4", 00:17:29.088 "traddr": "10.0.0.2", 00:17:29.088 "trsvcid": "4420" 00:17:29.088 }, 00:17:29.088 "peer_address": { 00:17:29.088 "trtype": "TCP", 00:17:29.088 "adrfam": "IPv4", 00:17:29.088 
"traddr": "10.0.0.1", 00:17:29.088 "trsvcid": "37576" 00:17:29.088 }, 00:17:29.088 "auth": { 00:17:29.088 "state": "completed", 00:17:29.088 "digest": "sha384", 00:17:29.088 "dhgroup": "ffdhe2048" 00:17:29.088 } 00:17:29.088 } 00:17:29.088 ]' 00:17:29.088 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.088 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.088 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.347 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:29.347 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.347 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.347 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.347 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.347 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:17:29.916 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.916 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.916 19:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.916 19:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.916 19:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.916 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.916 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:29.916 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:30.175 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:30.175 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.175 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:30.175 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:30.175 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:30.175 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.175 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.175 19:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.175 19:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.175 19:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.175 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.175 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.441 00:17:30.441 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.441 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.441 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.699 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.699 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.699 19:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.699 19:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.699 19:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.699 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.699 { 00:17:30.699 "cntlid": 61, 00:17:30.699 "qid": 0, 00:17:30.699 "state": "enabled", 00:17:30.699 "thread": "nvmf_tgt_poll_group_000", 00:17:30.699 "listen_address": { 00:17:30.699 "trtype": "TCP", 00:17:30.699 "adrfam": "IPv4", 00:17:30.699 "traddr": "10.0.0.2", 00:17:30.699 "trsvcid": "4420" 00:17:30.699 }, 00:17:30.699 "peer_address": { 00:17:30.699 "trtype": "TCP", 00:17:30.699 "adrfam": "IPv4", 00:17:30.699 "traddr": "10.0.0.1", 00:17:30.699 "trsvcid": "37606" 00:17:30.699 }, 00:17:30.699 "auth": { 00:17:30.699 "state": "completed", 00:17:30.699 "digest": "sha384", 00:17:30.699 "dhgroup": "ffdhe2048" 00:17:30.699 } 00:17:30.699 } 00:17:30.699 ]' 00:17:30.699 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.699 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.699 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.699 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.699 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.699 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.699 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.699 19:09:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.958 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:17:31.526 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.526 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.526 19:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.526 19:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.526 19:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.526 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.526 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:31.526 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:31.786 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:31.786 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.786 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.786 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:31.786 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.786 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.786 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:31.786 19:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.786 19:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.786 19:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.786 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.786 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.046 00:17:32.046 19:09:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.046 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.046 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.046 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.046 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.046 19:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.046 19:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.046 19:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.046 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.046 { 00:17:32.046 "cntlid": 63, 00:17:32.046 "qid": 0, 00:17:32.047 "state": "enabled", 00:17:32.047 "thread": "nvmf_tgt_poll_group_000", 00:17:32.047 "listen_address": { 00:17:32.047 "trtype": "TCP", 00:17:32.047 "adrfam": "IPv4", 00:17:32.047 "traddr": "10.0.0.2", 00:17:32.047 "trsvcid": "4420" 00:17:32.047 }, 00:17:32.047 "peer_address": { 00:17:32.047 "trtype": "TCP", 00:17:32.047 "adrfam": "IPv4", 00:17:32.047 "traddr": "10.0.0.1", 00:17:32.047 "trsvcid": "37646" 00:17:32.047 }, 00:17:32.047 "auth": { 00:17:32.047 "state": "completed", 00:17:32.047 "digest": "sha384", 00:17:32.047 "dhgroup": "ffdhe2048" 00:17:32.047 } 00:17:32.047 } 00:17:32.047 ]' 00:17:32.047 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.306 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.306 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.306 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:32.306 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.306 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.306 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.306 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.566 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
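Every connect_authenticate iteration in this trace has the same shape; stripped of timestamps and xtrace noise, one pass boils down to the sequence sketched below. This is a readable sketch reconstructed from the commands visible above, not a verbatim excerpt of target/auth.sh: the "hostrpc" lines in the log expand to rpc.py with -s /var/tmp/host.sock (a host-side SPDK app), while the rpc_cmd lines (nvmf_subsystem_add_host, nvmf_subsystem_get_qpairs, nvmf_subsystem_remove_host) evidently talk to the target app on its default socket. The rpc/hostnqn/subnqn shell variables and the SECRET/CTRL_SECRET placeholders are introduced here for readability; the real DHHC-1 strings, and the key0..key3 / ckey0..ckey3 names that vary per iteration, are the ones shown in the trace.

    #!/usr/bin/env bash
    # Sketch of one connect_authenticate pass as traced above (sha384/ffdhe2048/key1 shown).
    set -e   # abort if any verification below fails, as the test harness would

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    subnqn=nqn.2024-03.io.spdk:cnode0
    SECRET='DHHC-1:01:REPLACE_WITH_BASE64_KEY:'        # placeholder; real values appear in the trace
    CTRL_SECRET='DHHC-1:02:REPLACE_WITH_BASE64_KEY:'   # placeholder

    # Host side: restrict the initiator to the digest/dhgroup combination under test.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # Target side: authorize the host NQN on the subsystem with a key
    # (the controller key is optional; the key3 iterations omit it).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Attach through the host app, then verify the qpair actually authenticated
    # with the expected digest, dhgroup, and auth state before tearing down.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Repeat the handshake through the kernel initiator, then clean up.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
    nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

On the secrets themselves: per the NVMe-oF DH-HMAC-CHAP secret representation, the two-digit field after DHHC-1 indicates how the stored key material was transformed (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why the secrets in this trace carry different tags (01/02/03) depending on how each of key0 through key3 was generated.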
00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.136 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.396 00:17:33.396 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.396 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.396 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.656 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.656 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.656 19:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.656 19:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.656 19:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.656 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.656 { 
00:17:33.656 "cntlid": 65, 00:17:33.656 "qid": 0, 00:17:33.656 "state": "enabled", 00:17:33.656 "thread": "nvmf_tgt_poll_group_000", 00:17:33.656 "listen_address": { 00:17:33.656 "trtype": "TCP", 00:17:33.656 "adrfam": "IPv4", 00:17:33.656 "traddr": "10.0.0.2", 00:17:33.656 "trsvcid": "4420" 00:17:33.656 }, 00:17:33.656 "peer_address": { 00:17:33.656 "trtype": "TCP", 00:17:33.656 "adrfam": "IPv4", 00:17:33.656 "traddr": "10.0.0.1", 00:17:33.656 "trsvcid": "37664" 00:17:33.656 }, 00:17:33.656 "auth": { 00:17:33.656 "state": "completed", 00:17:33.656 "digest": "sha384", 00:17:33.656 "dhgroup": "ffdhe3072" 00:17:33.656 } 00:17:33.656 } 00:17:33.656 ]' 00:17:33.656 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.656 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.656 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.656 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:33.656 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.915 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.915 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.915 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.915 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:17:34.512 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.512 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.512 19:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.512 19:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.512 19:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.512 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.512 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:34.512 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:34.772 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:34.772 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.772 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:17:34.772 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:34.772 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:34.772 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.772 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.772 19:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.772 19:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.772 19:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.772 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.772 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.032 00:17:35.032 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.032 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.032 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.032 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.032 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.032 19:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.032 19:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.291 19:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.291 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.291 { 00:17:35.291 "cntlid": 67, 00:17:35.291 "qid": 0, 00:17:35.291 "state": "enabled", 00:17:35.291 "thread": "nvmf_tgt_poll_group_000", 00:17:35.291 "listen_address": { 00:17:35.291 "trtype": "TCP", 00:17:35.291 "adrfam": "IPv4", 00:17:35.291 "traddr": "10.0.0.2", 00:17:35.291 "trsvcid": "4420" 00:17:35.291 }, 00:17:35.291 "peer_address": { 00:17:35.291 "trtype": "TCP", 00:17:35.291 "adrfam": "IPv4", 00:17:35.291 "traddr": "10.0.0.1", 00:17:35.291 "trsvcid": "58980" 00:17:35.291 }, 00:17:35.291 "auth": { 00:17:35.291 "state": "completed", 00:17:35.291 "digest": "sha384", 00:17:35.291 "dhgroup": "ffdhe3072" 00:17:35.291 } 00:17:35.291 } 00:17:35.291 ]' 00:17:35.291 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.291 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.291 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.291 19:09:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.291 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.291 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.291 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.291 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.552 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.122 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.381 00:17:36.381 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.381 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.381 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.641 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.641 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.641 19:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.641 19:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.641 19:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.641 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.641 { 00:17:36.641 "cntlid": 69, 00:17:36.641 "qid": 0, 00:17:36.641 "state": "enabled", 00:17:36.641 "thread": "nvmf_tgt_poll_group_000", 00:17:36.641 "listen_address": { 00:17:36.641 "trtype": "TCP", 00:17:36.641 "adrfam": "IPv4", 00:17:36.641 "traddr": "10.0.0.2", 00:17:36.641 "trsvcid": "4420" 00:17:36.641 }, 00:17:36.641 "peer_address": { 00:17:36.641 "trtype": "TCP", 00:17:36.641 "adrfam": "IPv4", 00:17:36.641 "traddr": "10.0.0.1", 00:17:36.641 "trsvcid": "59004" 00:17:36.641 }, 00:17:36.641 "auth": { 00:17:36.641 "state": "completed", 00:17:36.641 "digest": "sha384", 00:17:36.641 "dhgroup": "ffdhe3072" 00:17:36.641 } 00:17:36.641 } 00:17:36.641 ]' 00:17:36.641 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.641 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.641 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.641 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:36.641 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.901 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.901 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.901 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.901 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret 
DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:17:37.469 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.469 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.469 19:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.469 19:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.469 19:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.469 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.469 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.469 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:37.728 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:37.728 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.728 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.728 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:37.728 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:37.728 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.728 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:37.728 19:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.728 19:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.728 19:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.728 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.728 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.988 00:17:37.988 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.988 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.988 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.247 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.247 19:09:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.247 19:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.247 19:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.247 19:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.247 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.247 { 00:17:38.247 "cntlid": 71, 00:17:38.247 "qid": 0, 00:17:38.247 "state": "enabled", 00:17:38.247 "thread": "nvmf_tgt_poll_group_000", 00:17:38.247 "listen_address": { 00:17:38.247 "trtype": "TCP", 00:17:38.247 "adrfam": "IPv4", 00:17:38.247 "traddr": "10.0.0.2", 00:17:38.247 "trsvcid": "4420" 00:17:38.247 }, 00:17:38.247 "peer_address": { 00:17:38.247 "trtype": "TCP", 00:17:38.247 "adrfam": "IPv4", 00:17:38.247 "traddr": "10.0.0.1", 00:17:38.247 "trsvcid": "59044" 00:17:38.247 }, 00:17:38.247 "auth": { 00:17:38.247 "state": "completed", 00:17:38.247 "digest": "sha384", 00:17:38.247 "dhgroup": "ffdhe3072" 00:17:38.247 } 00:17:38.247 } 00:17:38.247 ]' 00:17:38.248 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.248 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.248 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.248 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.248 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.248 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.248 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.248 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.507 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:17:39.077 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.077 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.077 19:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.077 19:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.077 19:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.077 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.077 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.077 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:39.077 19:09:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:39.337 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:39.337 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.337 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:39.337 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:39.337 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:39.337 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.337 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.337 19:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.337 19:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.337 19:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.337 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.337 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.596 00:17:39.596 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.596 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.596 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.856 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.856 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.856 19:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.856 19:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.856 19:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.856 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.856 { 00:17:39.856 "cntlid": 73, 00:17:39.856 "qid": 0, 00:17:39.856 "state": "enabled", 00:17:39.856 "thread": "nvmf_tgt_poll_group_000", 00:17:39.856 "listen_address": { 00:17:39.856 "trtype": "TCP", 00:17:39.856 "adrfam": "IPv4", 00:17:39.856 "traddr": "10.0.0.2", 00:17:39.856 "trsvcid": "4420" 00:17:39.856 }, 00:17:39.856 "peer_address": { 00:17:39.856 "trtype": "TCP", 00:17:39.856 "adrfam": "IPv4", 00:17:39.856 "traddr": "10.0.0.1", 00:17:39.856 "trsvcid": "59088" 00:17:39.856 }, 00:17:39.856 "auth": { 00:17:39.856 
"state": "completed", 00:17:39.856 "digest": "sha384", 00:17:39.856 "dhgroup": "ffdhe4096" 00:17:39.856 } 00:17:39.856 } 00:17:39.856 ]' 00:17:39.856 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.856 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.856 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.856 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:39.856 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.856 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.856 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.856 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.116 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:17:40.686 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.686 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.686 19:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.686 19:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.686 19:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.686 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.686 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.686 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.946 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:40.946 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.946 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:40.946 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:40.946 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:40.946 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.947 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.947 19:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.947 19:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.947 19:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.947 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.947 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.206 00:17:41.206 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.206 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.206 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.206 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.206 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.206 19:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.206 19:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.207 19:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.207 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.207 { 00:17:41.207 "cntlid": 75, 00:17:41.207 "qid": 0, 00:17:41.207 "state": "enabled", 00:17:41.207 "thread": "nvmf_tgt_poll_group_000", 00:17:41.207 "listen_address": { 00:17:41.207 "trtype": "TCP", 00:17:41.207 "adrfam": "IPv4", 00:17:41.207 "traddr": "10.0.0.2", 00:17:41.207 "trsvcid": "4420" 00:17:41.207 }, 00:17:41.207 "peer_address": { 00:17:41.207 "trtype": "TCP", 00:17:41.207 "adrfam": "IPv4", 00:17:41.207 "traddr": "10.0.0.1", 00:17:41.207 "trsvcid": "59112" 00:17:41.207 }, 00:17:41.207 "auth": { 00:17:41.207 "state": "completed", 00:17:41.207 "digest": "sha384", 00:17:41.207 "dhgroup": "ffdhe4096" 00:17:41.207 } 00:17:41.207 } 00:17:41.207 ]' 00:17:41.207 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.465 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.465 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.465 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:41.465 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.465 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.465 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.465 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.724 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.293 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.294 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:42.553 00:17:42.553 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.553 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.553 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.813 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.813 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.813 19:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.813 19:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.813 19:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.813 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.813 { 00:17:42.813 "cntlid": 77, 00:17:42.813 "qid": 0, 00:17:42.813 "state": "enabled", 00:17:42.813 "thread": "nvmf_tgt_poll_group_000", 00:17:42.813 "listen_address": { 00:17:42.813 "trtype": "TCP", 00:17:42.813 "adrfam": "IPv4", 00:17:42.813 "traddr": "10.0.0.2", 00:17:42.813 "trsvcid": "4420" 00:17:42.813 }, 00:17:42.813 "peer_address": { 00:17:42.813 "trtype": "TCP", 00:17:42.813 "adrfam": "IPv4", 00:17:42.813 "traddr": "10.0.0.1", 00:17:42.813 "trsvcid": "59128" 00:17:42.813 }, 00:17:42.813 "auth": { 00:17:42.813 "state": "completed", 00:17:42.813 "digest": "sha384", 00:17:42.813 "dhgroup": "ffdhe4096" 00:17:42.813 } 00:17:42.813 } 00:17:42.813 ]' 00:17:42.813 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.813 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.813 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.073 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.073 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.073 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.073 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.073 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.073 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:17:43.642 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.642 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.642 19:09:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.642 19:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.642 19:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.642 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.642 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:43.642 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:43.902 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:43.902 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.902 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:43.902 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:43.902 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:43.902 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.902 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:43.902 19:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.902 19:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.902 19:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.902 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.902 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.161 00:17:44.161 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.161 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.161 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.421 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.421 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.421 19:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.421 19:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.421 19:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.421 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.421 { 00:17:44.421 "cntlid": 79, 00:17:44.421 "qid": 
0, 00:17:44.421 "state": "enabled", 00:17:44.421 "thread": "nvmf_tgt_poll_group_000", 00:17:44.421 "listen_address": { 00:17:44.421 "trtype": "TCP", 00:17:44.421 "adrfam": "IPv4", 00:17:44.421 "traddr": "10.0.0.2", 00:17:44.421 "trsvcid": "4420" 00:17:44.421 }, 00:17:44.421 "peer_address": { 00:17:44.421 "trtype": "TCP", 00:17:44.421 "adrfam": "IPv4", 00:17:44.421 "traddr": "10.0.0.1", 00:17:44.421 "trsvcid": "59160" 00:17:44.421 }, 00:17:44.421 "auth": { 00:17:44.421 "state": "completed", 00:17:44.421 "digest": "sha384", 00:17:44.421 "dhgroup": "ffdhe4096" 00:17:44.421 } 00:17:44.421 } 00:17:44.421 ]' 00:17:44.421 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.421 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.421 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.421 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:44.421 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.680 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.680 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.680 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.680 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:17:45.248 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.248 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:45.248 19:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.248 19:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.248 19:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.248 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.248 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.248 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:45.248 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:45.508 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:45.508 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.508 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:45.508 19:09:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:45.508 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:45.508 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.508 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.508 19:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.508 19:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.508 19:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.508 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.508 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.767 00:17:45.767 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.767 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.767 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.025 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.025 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.025 19:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.025 19:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.025 19:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.025 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.025 { 00:17:46.025 "cntlid": 81, 00:17:46.025 "qid": 0, 00:17:46.025 "state": "enabled", 00:17:46.025 "thread": "nvmf_tgt_poll_group_000", 00:17:46.025 "listen_address": { 00:17:46.025 "trtype": "TCP", 00:17:46.025 "adrfam": "IPv4", 00:17:46.025 "traddr": "10.0.0.2", 00:17:46.025 "trsvcid": "4420" 00:17:46.025 }, 00:17:46.025 "peer_address": { 00:17:46.025 "trtype": "TCP", 00:17:46.025 "adrfam": "IPv4", 00:17:46.025 "traddr": "10.0.0.1", 00:17:46.025 "trsvcid": "38952" 00:17:46.025 }, 00:17:46.025 "auth": { 00:17:46.025 "state": "completed", 00:17:46.025 "digest": "sha384", 00:17:46.025 "dhgroup": "ffdhe6144" 00:17:46.025 } 00:17:46.025 } 00:17:46.025 ]' 00:17:46.025 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.025 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.025 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.285 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.285 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.285 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.285 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.285 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.285 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:17:46.853 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.853 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.853 19:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.853 19:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.853 19:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.853 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.853 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:46.853 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:47.113 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:47.113 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.113 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:47.113 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:47.113 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:47.113 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.113 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.113 19:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.113 19:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.113 19:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.113 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.113 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.371 00:17:47.371 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.371 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.371 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.630 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.630 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.630 19:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.630 19:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.630 19:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.630 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.630 { 00:17:47.630 "cntlid": 83, 00:17:47.630 "qid": 0, 00:17:47.630 "state": "enabled", 00:17:47.630 "thread": "nvmf_tgt_poll_group_000", 00:17:47.630 "listen_address": { 00:17:47.630 "trtype": "TCP", 00:17:47.630 "adrfam": "IPv4", 00:17:47.630 "traddr": "10.0.0.2", 00:17:47.630 "trsvcid": "4420" 00:17:47.630 }, 00:17:47.630 "peer_address": { 00:17:47.630 "trtype": "TCP", 00:17:47.630 "adrfam": "IPv4", 00:17:47.630 "traddr": "10.0.0.1", 00:17:47.630 "trsvcid": "38992" 00:17:47.630 }, 00:17:47.630 "auth": { 00:17:47.630 "state": "completed", 00:17:47.630 "digest": "sha384", 00:17:47.630 "dhgroup": "ffdhe6144" 00:17:47.630 } 00:17:47.630 } 00:17:47.630 ]' 00:17:47.630 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.630 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.630 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.889 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:47.889 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.889 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.889 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.889 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.148 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret 
DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:17:48.714 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.714 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.714 19:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.714 19:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.714 19:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.714 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.714 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:48.714 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:48.714 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:48.714 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.714 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:48.715 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:48.715 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:48.715 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.715 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.715 19:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.715 19:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.715 19:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.715 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.715 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.281 00:17:49.281 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.281 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.281 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.281 19:09:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.281 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.281 19:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.281 19:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.281 19:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.281 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.281 { 00:17:49.281 "cntlid": 85, 00:17:49.281 "qid": 0, 00:17:49.281 "state": "enabled", 00:17:49.281 "thread": "nvmf_tgt_poll_group_000", 00:17:49.281 "listen_address": { 00:17:49.281 "trtype": "TCP", 00:17:49.281 "adrfam": "IPv4", 00:17:49.281 "traddr": "10.0.0.2", 00:17:49.281 "trsvcid": "4420" 00:17:49.281 }, 00:17:49.281 "peer_address": { 00:17:49.281 "trtype": "TCP", 00:17:49.281 "adrfam": "IPv4", 00:17:49.281 "traddr": "10.0.0.1", 00:17:49.281 "trsvcid": "39022" 00:17:49.281 }, 00:17:49.281 "auth": { 00:17:49.281 "state": "completed", 00:17:49.281 "digest": "sha384", 00:17:49.281 "dhgroup": "ffdhe6144" 00:17:49.281 } 00:17:49.281 } 00:17:49.281 ]' 00:17:49.281 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.281 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.281 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.539 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.539 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.539 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.539 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.539 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.539 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:17:50.108 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.108 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.108 19:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.108 19:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.108 19:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.108 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.108 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:17:50.108 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:50.366 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:50.366 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.366 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:50.366 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:50.366 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:50.366 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.366 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:50.366 19:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.366 19:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.366 19:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.366 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.366 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.624 00:17:50.883 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.883 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.883 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.883 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.883 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.883 19:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.883 19:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.883 19:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.883 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.883 { 00:17:50.883 "cntlid": 87, 00:17:50.883 "qid": 0, 00:17:50.883 "state": "enabled", 00:17:50.883 "thread": "nvmf_tgt_poll_group_000", 00:17:50.883 "listen_address": { 00:17:50.883 "trtype": "TCP", 00:17:50.883 "adrfam": "IPv4", 00:17:50.883 "traddr": "10.0.0.2", 00:17:50.883 "trsvcid": "4420" 00:17:50.883 }, 00:17:50.883 "peer_address": { 00:17:50.883 "trtype": "TCP", 00:17:50.883 "adrfam": "IPv4", 00:17:50.883 "traddr": "10.0.0.1", 00:17:50.883 "trsvcid": "39048" 00:17:50.883 }, 00:17:50.883 "auth": { 00:17:50.883 "state": "completed", 
00:17:50.883 "digest": "sha384", 00:17:50.883 "dhgroup": "ffdhe6144" 00:17:50.883 } 00:17:50.883 } 00:17:50.883 ]' 00:17:50.883 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.883 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.883 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.143 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:51.143 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.143 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.143 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.143 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.143 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:17:51.712 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.712 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.712 19:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.712 19:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.712 19:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.712 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.712 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.712 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.712 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.972 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:51.972 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.972 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:51.972 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:51.972 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:51.972 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.972 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:51.972 19:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.972 19:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.972 19:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.972 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.972 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.541 00:17:52.541 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.541 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.541 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.541 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.541 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.541 19:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.541 19:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.801 19:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.801 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.801 { 00:17:52.801 "cntlid": 89, 00:17:52.801 "qid": 0, 00:17:52.801 "state": "enabled", 00:17:52.801 "thread": "nvmf_tgt_poll_group_000", 00:17:52.801 "listen_address": { 00:17:52.801 "trtype": "TCP", 00:17:52.801 "adrfam": "IPv4", 00:17:52.801 "traddr": "10.0.0.2", 00:17:52.801 "trsvcid": "4420" 00:17:52.801 }, 00:17:52.801 "peer_address": { 00:17:52.801 "trtype": "TCP", 00:17:52.801 "adrfam": "IPv4", 00:17:52.801 "traddr": "10.0.0.1", 00:17:52.801 "trsvcid": "39068" 00:17:52.801 }, 00:17:52.801 "auth": { 00:17:52.801 "state": "completed", 00:17:52.801 "digest": "sha384", 00:17:52.801 "dhgroup": "ffdhe8192" 00:17:52.801 } 00:17:52.801 } 00:17:52.801 ]' 00:17:52.801 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.801 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.801 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.801 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:52.801 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.801 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.801 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.801 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.061 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:17:53.630 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.630 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.630 19:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.630 19:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.630 19:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.630 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.630 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.630 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.630 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:53.630 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.630 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:53.630 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:53.630 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:53.630 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.630 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.630 19:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.630 19:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.630 19:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.630 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.630 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:17:54.199 00:17:54.199 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.199 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.199 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.458 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.458 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.458 19:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.458 19:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.458 19:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.458 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.458 { 00:17:54.458 "cntlid": 91, 00:17:54.458 "qid": 0, 00:17:54.458 "state": "enabled", 00:17:54.458 "thread": "nvmf_tgt_poll_group_000", 00:17:54.458 "listen_address": { 00:17:54.458 "trtype": "TCP", 00:17:54.458 "adrfam": "IPv4", 00:17:54.458 "traddr": "10.0.0.2", 00:17:54.458 "trsvcid": "4420" 00:17:54.458 }, 00:17:54.458 "peer_address": { 00:17:54.458 "trtype": "TCP", 00:17:54.458 "adrfam": "IPv4", 00:17:54.458 "traddr": "10.0.0.1", 00:17:54.458 "trsvcid": "39098" 00:17:54.458 }, 00:17:54.458 "auth": { 00:17:54.458 "state": "completed", 00:17:54.458 "digest": "sha384", 00:17:54.458 "dhgroup": "ffdhe8192" 00:17:54.458 } 00:17:54.458 } 00:17:54.458 ]' 00:17:54.458 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.458 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.458 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.458 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.458 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.458 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.458 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.459 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.718 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:17:55.288 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.288 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:55.288 19:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:55.288 19:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.288 19:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.288 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.288 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:55.288 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:55.548 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:55.548 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.548 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:55.548 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:55.548 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:55.548 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.548 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.548 19:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.548 19:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.548 19:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.548 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.548 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.807 00:17:56.066 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.067 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.067 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.067 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.067 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.067 19:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.067 19:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.067 19:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.067 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.067 { 
00:17:56.067 "cntlid": 93, 00:17:56.067 "qid": 0, 00:17:56.067 "state": "enabled", 00:17:56.067 "thread": "nvmf_tgt_poll_group_000", 00:17:56.067 "listen_address": { 00:17:56.067 "trtype": "TCP", 00:17:56.067 "adrfam": "IPv4", 00:17:56.067 "traddr": "10.0.0.2", 00:17:56.067 "trsvcid": "4420" 00:17:56.067 }, 00:17:56.067 "peer_address": { 00:17:56.067 "trtype": "TCP", 00:17:56.067 "adrfam": "IPv4", 00:17:56.067 "traddr": "10.0.0.1", 00:17:56.067 "trsvcid": "57490" 00:17:56.067 }, 00:17:56.067 "auth": { 00:17:56.067 "state": "completed", 00:17:56.067 "digest": "sha384", 00:17:56.067 "dhgroup": "ffdhe8192" 00:17:56.067 } 00:17:56.067 } 00:17:56.067 ]' 00:17:56.067 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.326 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.326 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.326 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.326 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.326 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.326 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.326 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.327 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:17:56.896 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:57.156 19:09:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.156 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.724 00:17:57.724 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.724 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.724 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.984 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.984 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.984 19:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.984 19:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.984 19:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.984 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.984 { 00:17:57.984 "cntlid": 95, 00:17:57.985 "qid": 0, 00:17:57.985 "state": "enabled", 00:17:57.985 "thread": "nvmf_tgt_poll_group_000", 00:17:57.985 "listen_address": { 00:17:57.985 "trtype": "TCP", 00:17:57.985 "adrfam": "IPv4", 00:17:57.985 "traddr": "10.0.0.2", 00:17:57.985 "trsvcid": "4420" 00:17:57.985 }, 00:17:57.985 "peer_address": { 00:17:57.985 "trtype": "TCP", 00:17:57.985 "adrfam": "IPv4", 00:17:57.985 "traddr": "10.0.0.1", 00:17:57.985 "trsvcid": "57518" 00:17:57.985 }, 00:17:57.985 "auth": { 00:17:57.985 "state": "completed", 00:17:57.985 "digest": "sha384", 00:17:57.985 "dhgroup": "ffdhe8192" 00:17:57.985 } 00:17:57.985 } 00:17:57.985 ]' 00:17:57.985 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.985 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.985 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.985 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.985 19:10:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.985 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.985 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.985 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.244 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:17:58.812 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.812 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:58.812 19:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.812 19:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.812 19:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.812 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:58.812 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.812 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.812 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:58.812 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.070 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:59.070 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.070 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.070 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:59.070 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:59.070 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.070 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.070 19:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.070 19:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.070 19:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.070 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.070 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.070 00:17:59.329 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.329 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.329 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.329 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.329 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.329 19:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.329 19:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.329 19:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.329 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.329 { 00:17:59.329 "cntlid": 97, 00:17:59.329 "qid": 0, 00:17:59.329 "state": "enabled", 00:17:59.329 "thread": "nvmf_tgt_poll_group_000", 00:17:59.329 "listen_address": { 00:17:59.329 "trtype": "TCP", 00:17:59.329 "adrfam": "IPv4", 00:17:59.329 "traddr": "10.0.0.2", 00:17:59.329 "trsvcid": "4420" 00:17:59.329 }, 00:17:59.329 "peer_address": { 00:17:59.329 "trtype": "TCP", 00:17:59.329 "adrfam": "IPv4", 00:17:59.329 "traddr": "10.0.0.1", 00:17:59.329 "trsvcid": "57546" 00:17:59.329 }, 00:17:59.329 "auth": { 00:17:59.329 "state": "completed", 00:17:59.329 "digest": "sha512", 00:17:59.329 "dhgroup": "null" 00:17:59.329 } 00:17:59.329 } 00:17:59.329 ]' 00:17:59.329 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.329 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.329 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.588 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:59.589 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.589 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.589 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.589 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.589 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret 
DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:18:00.156 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.156 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.156 19:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.156 19:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.156 19:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.156 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.156 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:00.156 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:00.415 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:00.415 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.415 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.415 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:00.415 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:00.415 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.415 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.415 19:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.415 19:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.415 19:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.415 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.415 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.674 00:18:00.674 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.674 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.674 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.933 19:10:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.933 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.933 19:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.933 19:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.933 19:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.933 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.933 { 00:18:00.933 "cntlid": 99, 00:18:00.933 "qid": 0, 00:18:00.933 "state": "enabled", 00:18:00.933 "thread": "nvmf_tgt_poll_group_000", 00:18:00.933 "listen_address": { 00:18:00.933 "trtype": "TCP", 00:18:00.933 "adrfam": "IPv4", 00:18:00.933 "traddr": "10.0.0.2", 00:18:00.933 "trsvcid": "4420" 00:18:00.933 }, 00:18:00.933 "peer_address": { 00:18:00.933 "trtype": "TCP", 00:18:00.933 "adrfam": "IPv4", 00:18:00.933 "traddr": "10.0.0.1", 00:18:00.933 "trsvcid": "57576" 00:18:00.933 }, 00:18:00.933 "auth": { 00:18:00.933 "state": "completed", 00:18:00.933 "digest": "sha512", 00:18:00.933 "dhgroup": "null" 00:18:00.933 } 00:18:00.933 } 00:18:00.933 ]' 00:18:00.933 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.933 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.933 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.933 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:00.933 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.933 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.933 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.933 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.194 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:18:01.764 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.764 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.764 19:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.764 19:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.764 19:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.764 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.764 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:01.764 19:10:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:02.023 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:02.023 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.023 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.023 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:02.023 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:02.023 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.023 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.023 19:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.023 19:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.023 19:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.023 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.023 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.284 00:18:02.284 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.284 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.284 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.284 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.284 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.284 19:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.284 19:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.284 19:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.284 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.284 { 00:18:02.284 "cntlid": 101, 00:18:02.284 "qid": 0, 00:18:02.284 "state": "enabled", 00:18:02.284 "thread": "nvmf_tgt_poll_group_000", 00:18:02.284 "listen_address": { 00:18:02.284 "trtype": "TCP", 00:18:02.284 "adrfam": "IPv4", 00:18:02.284 "traddr": "10.0.0.2", 00:18:02.284 "trsvcid": "4420" 00:18:02.284 }, 00:18:02.284 "peer_address": { 00:18:02.284 "trtype": "TCP", 00:18:02.284 "adrfam": "IPv4", 00:18:02.284 "traddr": "10.0.0.1", 00:18:02.284 "trsvcid": "57608" 00:18:02.284 }, 00:18:02.284 "auth": 
{ 00:18:02.284 "state": "completed", 00:18:02.284 "digest": "sha512", 00:18:02.284 "dhgroup": "null" 00:18:02.284 } 00:18:02.284 } 00:18:02.284 ]' 00:18:02.284 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.543 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.543 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.543 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:02.543 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.543 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.543 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.543 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.803 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.371 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.630 00:18:03.630 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.630 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.630 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.889 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.889 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.889 19:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.889 19:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.889 19:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.889 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.889 { 00:18:03.889 "cntlid": 103, 00:18:03.889 "qid": 0, 00:18:03.889 "state": "enabled", 00:18:03.889 "thread": "nvmf_tgt_poll_group_000", 00:18:03.889 "listen_address": { 00:18:03.889 "trtype": "TCP", 00:18:03.889 "adrfam": "IPv4", 00:18:03.889 "traddr": "10.0.0.2", 00:18:03.889 "trsvcid": "4420" 00:18:03.889 }, 00:18:03.889 "peer_address": { 00:18:03.889 "trtype": "TCP", 00:18:03.889 "adrfam": "IPv4", 00:18:03.889 "traddr": "10.0.0.1", 00:18:03.890 "trsvcid": "57618" 00:18:03.890 }, 00:18:03.890 "auth": { 00:18:03.890 "state": "completed", 00:18:03.890 "digest": "sha512", 00:18:03.890 "dhgroup": "null" 00:18:03.890 } 00:18:03.890 } 00:18:03.890 ]' 00:18:03.890 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.890 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.890 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.890 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:03.890 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.153 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.153 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.153 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.153 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:18:04.720 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.720 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:04.720 19:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.720 19:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.720 19:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.720 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.720 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.720 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:04.720 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:04.979 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:04.979 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.979 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:04.979 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:04.979 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:04.979 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.979 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.979 19:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.979 19:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.979 19:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.979 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.979 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.238 00:18:05.238 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.238 19:10:07 
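
The loop markers target/auth.sh@92 and @93 show the structure of the test: an outer loop over DH groups and an inner loop over key indices 0-3; at this point the null group is done and the sweep has moved on to ffdhe2048. The sweep is equivalent to a sketch like this (the exact arrays live in target/auth.sh; only null through ffdhe4096 appear in this excerpt, so any further group names are an assumption):

    for dhgroup in "${dhgroups[@]}"; do   # null ffdhe2048 ffdhe3072 ffdhe4096 ...
        for keyid in "${!keys[@]}"; do    # 0 1 2 3
            # restrict the host to one digest/dhgroup, then run one full cycle
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done
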
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.238 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.497 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.497 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.497 19:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.497 19:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.497 19:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.497 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.497 { 00:18:05.497 "cntlid": 105, 00:18:05.497 "qid": 0, 00:18:05.497 "state": "enabled", 00:18:05.497 "thread": "nvmf_tgt_poll_group_000", 00:18:05.497 "listen_address": { 00:18:05.497 "trtype": "TCP", 00:18:05.497 "adrfam": "IPv4", 00:18:05.498 "traddr": "10.0.0.2", 00:18:05.498 "trsvcid": "4420" 00:18:05.498 }, 00:18:05.498 "peer_address": { 00:18:05.498 "trtype": "TCP", 00:18:05.498 "adrfam": "IPv4", 00:18:05.498 "traddr": "10.0.0.1", 00:18:05.498 "trsvcid": "49074" 00:18:05.498 }, 00:18:05.498 "auth": { 00:18:05.498 "state": "completed", 00:18:05.498 "digest": "sha512", 00:18:05.498 "dhgroup": "ffdhe2048" 00:18:05.498 } 00:18:05.498 } 00:18:05.498 ]' 00:18:05.498 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.498 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.498 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.498 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:05.498 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.498 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.498 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.498 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.757 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:18:06.324 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.324 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.324 19:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.324 19:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:06.324 19:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.324 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.324 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:06.324 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:06.583 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:06.583 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.583 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:06.583 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:06.583 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:06.583 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.583 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.583 19:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.583 19:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.583 19:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.583 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.583 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.583 00:18:06.842 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.842 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.842 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.842 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.842 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.842 19:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.842 19:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.842 19:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.842 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.842 { 00:18:06.842 "cntlid": 107, 00:18:06.842 "qid": 0, 00:18:06.842 "state": "enabled", 00:18:06.842 "thread": 
"nvmf_tgt_poll_group_000", 00:18:06.842 "listen_address": { 00:18:06.842 "trtype": "TCP", 00:18:06.842 "adrfam": "IPv4", 00:18:06.842 "traddr": "10.0.0.2", 00:18:06.842 "trsvcid": "4420" 00:18:06.842 }, 00:18:06.842 "peer_address": { 00:18:06.842 "trtype": "TCP", 00:18:06.842 "adrfam": "IPv4", 00:18:06.842 "traddr": "10.0.0.1", 00:18:06.842 "trsvcid": "49090" 00:18:06.842 }, 00:18:06.842 "auth": { 00:18:06.842 "state": "completed", 00:18:06.842 "digest": "sha512", 00:18:06.842 "dhgroup": "ffdhe2048" 00:18:06.842 } 00:18:06.842 } 00:18:06.842 ]' 00:18:06.842 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.842 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.842 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.102 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.102 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.102 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.102 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.102 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.102 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:18:07.670 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.670 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:07.670 19:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.670 19:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.929 19:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.929 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.929 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:07.929 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:07.929 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:07.929 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.929 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.929 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:07.930 19:10:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:07.930 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.930 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.930 19:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.930 19:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.930 19:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.930 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.930 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.189 00:18:08.189 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.189 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.189 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.449 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.449 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.449 19:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.449 19:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.449 19:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.449 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.449 { 00:18:08.449 "cntlid": 109, 00:18:08.449 "qid": 0, 00:18:08.449 "state": "enabled", 00:18:08.449 "thread": "nvmf_tgt_poll_group_000", 00:18:08.449 "listen_address": { 00:18:08.449 "trtype": "TCP", 00:18:08.449 "adrfam": "IPv4", 00:18:08.449 "traddr": "10.0.0.2", 00:18:08.449 "trsvcid": "4420" 00:18:08.449 }, 00:18:08.449 "peer_address": { 00:18:08.449 "trtype": "TCP", 00:18:08.449 "adrfam": "IPv4", 00:18:08.449 "traddr": "10.0.0.1", 00:18:08.449 "trsvcid": "49120" 00:18:08.449 }, 00:18:08.449 "auth": { 00:18:08.449 "state": "completed", 00:18:08.449 "digest": "sha512", 00:18:08.449 "dhgroup": "ffdhe2048" 00:18:08.449 } 00:18:08.449 } 00:18:08.449 ]' 00:18:08.449 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.449 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.449 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.449 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:08.449 19:10:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.449 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.449 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.449 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.708 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:18:09.277 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.277 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:09.277 19:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.277 19:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.277 19:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.277 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.278 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:09.278 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:09.537 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:09.537 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.537 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.537 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:09.537 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:09.537 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.537 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:09.537 19:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.537 19:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.537 19:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.537 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.537 19:10:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.797 00:18:09.797 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.797 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.797 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.057 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.057 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.057 19:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.057 19:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.057 19:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.057 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.057 { 00:18:10.057 "cntlid": 111, 00:18:10.057 "qid": 0, 00:18:10.057 "state": "enabled", 00:18:10.057 "thread": "nvmf_tgt_poll_group_000", 00:18:10.057 "listen_address": { 00:18:10.057 "trtype": "TCP", 00:18:10.057 "adrfam": "IPv4", 00:18:10.057 "traddr": "10.0.0.2", 00:18:10.057 "trsvcid": "4420" 00:18:10.057 }, 00:18:10.057 "peer_address": { 00:18:10.057 "trtype": "TCP", 00:18:10.057 "adrfam": "IPv4", 00:18:10.057 "traddr": "10.0.0.1", 00:18:10.057 "trsvcid": "49142" 00:18:10.057 }, 00:18:10.057 "auth": { 00:18:10.057 "state": "completed", 00:18:10.057 "digest": "sha512", 00:18:10.057 "dhgroup": "ffdhe2048" 00:18:10.057 } 00:18:10.057 } 00:18:10.057 ]' 00:18:10.057 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.057 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.057 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.057 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:10.057 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.057 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.057 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.057 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.316 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.884 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.143 00:18:11.143 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.143 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.143 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.402 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.402 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.402 19:10:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.402 19:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.402 19:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.402 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.402 { 00:18:11.402 "cntlid": 113, 00:18:11.402 "qid": 0, 00:18:11.402 "state": "enabled", 00:18:11.402 "thread": "nvmf_tgt_poll_group_000", 00:18:11.402 "listen_address": { 00:18:11.402 "trtype": "TCP", 00:18:11.402 "adrfam": "IPv4", 00:18:11.402 "traddr": "10.0.0.2", 00:18:11.402 "trsvcid": "4420" 00:18:11.402 }, 00:18:11.402 "peer_address": { 00:18:11.402 "trtype": "TCP", 00:18:11.402 "adrfam": "IPv4", 00:18:11.402 "traddr": "10.0.0.1", 00:18:11.402 "trsvcid": "49168" 00:18:11.402 }, 00:18:11.402 "auth": { 00:18:11.402 "state": "completed", 00:18:11.402 "digest": "sha512", 00:18:11.402 "dhgroup": "ffdhe3072" 00:18:11.402 } 00:18:11.402 } 00:18:11.402 ]' 00:18:11.402 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.402 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.402 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.661 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.661 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.661 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.661 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.661 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.661 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:18:12.230 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.230 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.230 19:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.230 19:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.230 19:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.230 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.230 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.230 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.490 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:12.490 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.490 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.490 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:12.490 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:12.490 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.490 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.490 19:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.490 19:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.490 19:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.490 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.490 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.749 00:18:12.749 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.749 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.749 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.009 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.009 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.009 19:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.009 19:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.009 19:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.009 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.009 { 00:18:13.009 "cntlid": 115, 00:18:13.009 "qid": 0, 00:18:13.009 "state": "enabled", 00:18:13.009 "thread": "nvmf_tgt_poll_group_000", 00:18:13.009 "listen_address": { 00:18:13.009 "trtype": "TCP", 00:18:13.009 "adrfam": "IPv4", 00:18:13.009 "traddr": "10.0.0.2", 00:18:13.009 "trsvcid": "4420" 00:18:13.009 }, 00:18:13.009 "peer_address": { 00:18:13.009 "trtype": "TCP", 00:18:13.009 "adrfam": "IPv4", 00:18:13.009 "traddr": "10.0.0.1", 00:18:13.009 "trsvcid": "49204" 00:18:13.009 }, 00:18:13.009 "auth": { 00:18:13.009 "state": "completed", 00:18:13.009 "digest": "sha512", 00:18:13.009 "dhgroup": "ffdhe3072" 00:18:13.009 } 00:18:13.009 } 
00:18:13.009 ]' 00:18:13.009 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.009 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.009 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.009 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.009 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.009 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.009 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.009 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.268 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:18:13.837 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.837 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:13.837 19:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.837 19:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.837 19:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.837 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.837 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.837 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:14.097 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:14.097 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.097 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:14.097 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:14.097 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:14.097 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.097 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.097 19:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.097 19:10:16 
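
After each attach, the test pulls the qpair list from the target and asserts that authentication completed with the expected parameters; that is what the target/auth.sh@44-@48 steps above are doing. A sketch of the check, using the same jq paths as the log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The decisive assertion is .auth.state == "completed": a qpair that connected without authentication, or whose handshake failed, would not report that state.
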
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.097 19:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.097 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.097 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.356 00:18:14.356 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.356 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.356 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.356 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.356 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.356 19:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.356 19:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.356 19:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.356 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.356 { 00:18:14.356 "cntlid": 117, 00:18:14.356 "qid": 0, 00:18:14.356 "state": "enabled", 00:18:14.356 "thread": "nvmf_tgt_poll_group_000", 00:18:14.356 "listen_address": { 00:18:14.356 "trtype": "TCP", 00:18:14.356 "adrfam": "IPv4", 00:18:14.356 "traddr": "10.0.0.2", 00:18:14.356 "trsvcid": "4420" 00:18:14.356 }, 00:18:14.356 "peer_address": { 00:18:14.356 "trtype": "TCP", 00:18:14.356 "adrfam": "IPv4", 00:18:14.356 "traddr": "10.0.0.1", 00:18:14.356 "trsvcid": "49224" 00:18:14.356 }, 00:18:14.356 "auth": { 00:18:14.356 "state": "completed", 00:18:14.356 "digest": "sha512", 00:18:14.356 "dhgroup": "ffdhe3072" 00:18:14.356 } 00:18:14.356 } 00:18:14.356 ]' 00:18:14.356 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.615 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.615 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.615 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:14.615 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.615 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.615 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.615 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.875 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.443 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.702 00:18:15.702 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.702 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.702 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.962 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.962 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.962 19:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.962 19:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.962 19:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.962 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.962 { 00:18:15.962 "cntlid": 119, 00:18:15.962 "qid": 0, 00:18:15.962 "state": "enabled", 00:18:15.962 "thread": "nvmf_tgt_poll_group_000", 00:18:15.962 "listen_address": { 00:18:15.962 "trtype": "TCP", 00:18:15.962 "adrfam": "IPv4", 00:18:15.962 "traddr": "10.0.0.2", 00:18:15.962 "trsvcid": "4420" 00:18:15.962 }, 00:18:15.962 "peer_address": { 00:18:15.962 "trtype": "TCP", 00:18:15.962 "adrfam": "IPv4", 00:18:15.962 "traddr": "10.0.0.1", 00:18:15.962 "trsvcid": "58452" 00:18:15.962 }, 00:18:15.962 "auth": { 00:18:15.962 "state": "completed", 00:18:15.962 "digest": "sha512", 00:18:15.962 "dhgroup": "ffdhe3072" 00:18:15.962 } 00:18:15.962 } 00:18:15.962 ]' 00:18:15.962 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.962 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.962 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.962 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:15.962 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.221 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.221 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.221 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.221 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:18:16.790 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.790 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.790 19:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.790 19:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.790 19:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.790 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.790 19:10:19 
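
Each cycle also exercises the Linux kernel initiator: once the SPDK host controller is detached (target/auth.sh@49), nvme-cli connects with the same key material (@52), disconnects (@55), and the host is removed from the subsystem (@56) before the sweep advances, here to ffdhe4096. The secrets follow the NVMe DH-HMAC-CHAP representation "DHHC-1:<t>:<base64>:", where <t> is 00 for a plain secret and 01/02/03 for a secret transformed with SHA-256/384/512, and the base64 payload carries the key plus a CRC-32 integrity check. A sketch of that leg for key3 (secret abbreviated here; the full value appears in the log — key3 has no controller key in this test, so no --dhchap-ctrl-secret is passed and only the host is authenticated):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:03:NjJiOTc1...'   # -i 1: a single I/O queue
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
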
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.790 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.790 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:17.049 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:17.049 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.049 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.049 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:17.049 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:17.049 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.049 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.049 19:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.049 19:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.049 19:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.049 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.049 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.308 00:18:17.308 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.308 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.308 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.568 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.568 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.568 19:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.568 19:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.568 19:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.568 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.568 { 00:18:17.568 "cntlid": 121, 00:18:17.568 "qid": 0, 00:18:17.568 "state": "enabled", 00:18:17.568 "thread": "nvmf_tgt_poll_group_000", 00:18:17.568 "listen_address": { 00:18:17.568 "trtype": "TCP", 00:18:17.568 "adrfam": "IPv4", 
00:18:17.568 "traddr": "10.0.0.2", 00:18:17.568 "trsvcid": "4420" 00:18:17.568 }, 00:18:17.568 "peer_address": { 00:18:17.568 "trtype": "TCP", 00:18:17.568 "adrfam": "IPv4", 00:18:17.568 "traddr": "10.0.0.1", 00:18:17.568 "trsvcid": "58482" 00:18:17.568 }, 00:18:17.568 "auth": { 00:18:17.568 "state": "completed", 00:18:17.568 "digest": "sha512", 00:18:17.568 "dhgroup": "ffdhe4096" 00:18:17.568 } 00:18:17.568 } 00:18:17.568 ]' 00:18:17.568 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.568 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.568 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.568 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:17.568 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.568 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.568 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.568 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.827 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:18:18.395 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.395 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.395 19:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.395 19:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.395 19:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.395 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.395 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:18.395 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:18.654 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:18.654 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.654 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:18.654 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:18.654 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:18.654 19:10:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.654 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.654 19:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.654 19:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.654 19:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.654 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.654 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.912 00:18:18.912 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.912 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.912 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.171 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.171 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.171 19:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.171 19:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.171 19:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.171 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.171 { 00:18:19.171 "cntlid": 123, 00:18:19.171 "qid": 0, 00:18:19.171 "state": "enabled", 00:18:19.171 "thread": "nvmf_tgt_poll_group_000", 00:18:19.171 "listen_address": { 00:18:19.171 "trtype": "TCP", 00:18:19.171 "adrfam": "IPv4", 00:18:19.171 "traddr": "10.0.0.2", 00:18:19.171 "trsvcid": "4420" 00:18:19.171 }, 00:18:19.171 "peer_address": { 00:18:19.171 "trtype": "TCP", 00:18:19.171 "adrfam": "IPv4", 00:18:19.171 "traddr": "10.0.0.1", 00:18:19.171 "trsvcid": "58500" 00:18:19.171 }, 00:18:19.171 "auth": { 00:18:19.171 "state": "completed", 00:18:19.171 "digest": "sha512", 00:18:19.171 "dhgroup": "ffdhe4096" 00:18:19.171 } 00:18:19.171 } 00:18:19.171 ]' 00:18:19.171 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.171 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.171 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.171 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:19.171 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.171 19:10:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.171 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.171 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.430 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.999 19:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.259 19:10:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.259 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.259 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.519 00:18:20.519 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.519 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.519 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.519 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.519 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.519 19:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.519 19:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.519 19:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.519 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.519 { 00:18:20.519 "cntlid": 125, 00:18:20.519 "qid": 0, 00:18:20.519 "state": "enabled", 00:18:20.519 "thread": "nvmf_tgt_poll_group_000", 00:18:20.519 "listen_address": { 00:18:20.519 "trtype": "TCP", 00:18:20.519 "adrfam": "IPv4", 00:18:20.519 "traddr": "10.0.0.2", 00:18:20.519 "trsvcid": "4420" 00:18:20.519 }, 00:18:20.519 "peer_address": { 00:18:20.519 "trtype": "TCP", 00:18:20.519 "adrfam": "IPv4", 00:18:20.519 "traddr": "10.0.0.1", 00:18:20.519 "trsvcid": "58528" 00:18:20.519 }, 00:18:20.519 "auth": { 00:18:20.519 "state": "completed", 00:18:20.519 "digest": "sha512", 00:18:20.519 "dhgroup": "ffdhe4096" 00:18:20.519 } 00:18:20.519 } 00:18:20.519 ]' 00:18:20.519 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.777 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.777 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.777 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:20.777 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.777 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.777 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.777 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.036 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:18:21.606 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
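Each connect_authenticate pass in this stretch of the log exercises one (digest, dhgroup, keyid) combination end to end: the host-side bdev_nvme options are pinned to a single digest and DH group, the host NQN is added to the subsystem with the DH-HMAC-CHAP key under test, a controller is attached through the host RPC socket, the qpair's negotiated auth parameters are verified with jq, and the same handshake is then repeated from the kernel initiator via nvme-cli before teardown for the next iteration. A minimal sketch of one such pass, assuming an SPDK target listening on 10.0.0.2:4420, a host application on /var/tmp/host.sock, SPDK's scripts/rpc.py on PATH, and key names key2/ckey2 already registered by the test setup; <host-nqn>, <host-id>, and the DHHC-1 secrets below are placeholders:

    # host side: restrict negotiation to the digest/dhgroup pair under test
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # target side: allow the host NQN on the subsystem with bidirectional keys
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # attach a controller through the host RPC socket, authenticating with the same keys
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # confirm the qpair completed authentication with the expected parameters
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .state, .digest, .dhgroup'
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # repeat the handshake from the kernel initiator, then disconnect
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <host-nqn> \
        --hostid <host-id> --dhchap-secret DHHC-1:02:<base64-key>: \
        --dhchap-ctrl-secret DHHC-1:01:<base64-key>:
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

After the disconnect, the host entry is dropped with nvmf_subsystem_remove_host, which is the remove/re-add cadence visible between the iterations above.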
00:18:21.606 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:21.606 19:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.606 19:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.606 19:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.606 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.606 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:21.606 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:21.606 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:21.606 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.606 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.606 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:21.606 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:21.606 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.606 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:21.606 19:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.606 19:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.606 19:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.606 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.606 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.865 00:18:21.865 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.865 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.865 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.125 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.125 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.125 19:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.125 19:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:22.125 19:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.125 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.125 { 00:18:22.125 "cntlid": 127, 00:18:22.125 "qid": 0, 00:18:22.125 "state": "enabled", 00:18:22.125 "thread": "nvmf_tgt_poll_group_000", 00:18:22.125 "listen_address": { 00:18:22.125 "trtype": "TCP", 00:18:22.125 "adrfam": "IPv4", 00:18:22.125 "traddr": "10.0.0.2", 00:18:22.125 "trsvcid": "4420" 00:18:22.125 }, 00:18:22.125 "peer_address": { 00:18:22.125 "trtype": "TCP", 00:18:22.125 "adrfam": "IPv4", 00:18:22.125 "traddr": "10.0.0.1", 00:18:22.125 "trsvcid": "58552" 00:18:22.125 }, 00:18:22.125 "auth": { 00:18:22.125 "state": "completed", 00:18:22.125 "digest": "sha512", 00:18:22.125 "dhgroup": "ffdhe4096" 00:18:22.125 } 00:18:22.125 } 00:18:22.125 ]' 00:18:22.125 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.125 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.125 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.125 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:22.125 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.384 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.384 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.384 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.384 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:18:22.953 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.953 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.953 19:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.953 19:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.953 19:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.953 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.953 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.953 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:22.953 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:23.213 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:18:23.213 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.213 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:23.213 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:23.213 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:23.213 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.213 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.213 19:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.214 19:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.214 19:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.214 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.214 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.473 00:18:23.733 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.733 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.733 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.733 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.733 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.733 19:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.733 19:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.733 19:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.733 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.733 { 00:18:23.733 "cntlid": 129, 00:18:23.733 "qid": 0, 00:18:23.733 "state": "enabled", 00:18:23.733 "thread": "nvmf_tgt_poll_group_000", 00:18:23.733 "listen_address": { 00:18:23.733 "trtype": "TCP", 00:18:23.733 "adrfam": "IPv4", 00:18:23.733 "traddr": "10.0.0.2", 00:18:23.733 "trsvcid": "4420" 00:18:23.733 }, 00:18:23.733 "peer_address": { 00:18:23.733 "trtype": "TCP", 00:18:23.733 "adrfam": "IPv4", 00:18:23.733 "traddr": "10.0.0.1", 00:18:23.733 "trsvcid": "58568" 00:18:23.733 }, 00:18:23.733 "auth": { 00:18:23.733 "state": "completed", 00:18:23.733 "digest": "sha512", 00:18:23.733 "dhgroup": "ffdhe6144" 00:18:23.733 } 00:18:23.733 } 00:18:23.733 ]' 00:18:23.733 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.733 19:10:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.734 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.993 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.993 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.993 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.993 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.993 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.252 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.822 19:10:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.822 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.392 00:18:25.392 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.392 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.392 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.392 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.392 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.392 19:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.392 19:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.392 19:10:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.392 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.392 { 00:18:25.392 "cntlid": 131, 00:18:25.392 "qid": 0, 00:18:25.392 "state": "enabled", 00:18:25.392 "thread": "nvmf_tgt_poll_group_000", 00:18:25.392 "listen_address": { 00:18:25.392 "trtype": "TCP", 00:18:25.392 "adrfam": "IPv4", 00:18:25.392 "traddr": "10.0.0.2", 00:18:25.392 "trsvcid": "4420" 00:18:25.392 }, 00:18:25.392 "peer_address": { 00:18:25.392 "trtype": "TCP", 00:18:25.392 "adrfam": "IPv4", 00:18:25.392 "traddr": "10.0.0.1", 00:18:25.392 "trsvcid": "51338" 00:18:25.392 }, 00:18:25.392 "auth": { 00:18:25.392 "state": "completed", 00:18:25.392 "digest": "sha512", 00:18:25.392 "dhgroup": "ffdhe6144" 00:18:25.392 } 00:18:25.392 } 00:18:25.392 ]' 00:18:25.392 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.392 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.392 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.652 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.652 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.652 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.652 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.652 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.652 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:18:26.221 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.221 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:26.221 19:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.221 19:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.221 19:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.221 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.221 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:26.221 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:26.480 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:26.480 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.480 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:26.480 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:26.480 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:26.480 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.480 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.480 19:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.480 19:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.481 19:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.481 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.481 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.740 00:18:26.740 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.740 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.740 19:10:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.999 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.999 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.999 19:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.999 19:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.999 19:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.999 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.999 { 00:18:26.999 "cntlid": 133, 00:18:26.999 "qid": 0, 00:18:26.999 "state": "enabled", 00:18:26.999 "thread": "nvmf_tgt_poll_group_000", 00:18:26.999 "listen_address": { 00:18:26.999 "trtype": "TCP", 00:18:26.999 "adrfam": "IPv4", 00:18:26.999 "traddr": "10.0.0.2", 00:18:27.000 "trsvcid": "4420" 00:18:27.000 }, 00:18:27.000 "peer_address": { 00:18:27.000 "trtype": "TCP", 00:18:27.000 "adrfam": "IPv4", 00:18:27.000 "traddr": "10.0.0.1", 00:18:27.000 "trsvcid": "51364" 00:18:27.000 }, 00:18:27.000 "auth": { 00:18:27.000 "state": "completed", 00:18:27.000 "digest": "sha512", 00:18:27.000 "dhgroup": "ffdhe6144" 00:18:27.000 } 00:18:27.000 } 00:18:27.000 ]' 00:18:27.000 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.000 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.000 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.000 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:27.000 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.259 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.259 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.259 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.259 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:18:27.829 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.829 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:27.829 19:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.829 19:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.829 19:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.829 19:10:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.829 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:27.829 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:28.088 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:28.088 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.088 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:28.088 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:28.088 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:28.088 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.088 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:28.088 19:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.088 19:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.088 19:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.088 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.088 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.348 00:18:28.348 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.348 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.348 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.608 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.608 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.608 19:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.608 19:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.608 19:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.608 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.608 { 00:18:28.608 "cntlid": 135, 00:18:28.608 "qid": 0, 00:18:28.608 "state": "enabled", 00:18:28.608 "thread": "nvmf_tgt_poll_group_000", 00:18:28.608 "listen_address": { 00:18:28.608 "trtype": "TCP", 00:18:28.608 "adrfam": "IPv4", 00:18:28.608 "traddr": "10.0.0.2", 00:18:28.608 "trsvcid": "4420" 00:18:28.608 }, 
00:18:28.608 "peer_address": { 00:18:28.608 "trtype": "TCP", 00:18:28.608 "adrfam": "IPv4", 00:18:28.608 "traddr": "10.0.0.1", 00:18:28.608 "trsvcid": "51378" 00:18:28.608 }, 00:18:28.608 "auth": { 00:18:28.608 "state": "completed", 00:18:28.608 "digest": "sha512", 00:18:28.608 "dhgroup": "ffdhe6144" 00:18:28.608 } 00:18:28.608 } 00:18:28.608 ]' 00:18:28.608 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.608 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.608 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.867 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:28.867 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.867 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.867 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.867 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.867 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:18:29.437 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.437 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:29.437 19:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.437 19:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.437 19:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.437 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.437 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.437 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.437 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.696 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:29.696 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.696 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:29.696 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:29.696 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:29.696 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:29.696 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.696 19:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.696 19:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.696 19:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.696 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.696 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.264 00:18:30.264 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.264 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.264 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.264 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.264 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.264 19:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.264 19:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.264 19:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.264 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.264 { 00:18:30.264 "cntlid": 137, 00:18:30.264 "qid": 0, 00:18:30.264 "state": "enabled", 00:18:30.264 "thread": "nvmf_tgt_poll_group_000", 00:18:30.264 "listen_address": { 00:18:30.264 "trtype": "TCP", 00:18:30.264 "adrfam": "IPv4", 00:18:30.264 "traddr": "10.0.0.2", 00:18:30.264 "trsvcid": "4420" 00:18:30.264 }, 00:18:30.264 "peer_address": { 00:18:30.264 "trtype": "TCP", 00:18:30.264 "adrfam": "IPv4", 00:18:30.264 "traddr": "10.0.0.1", 00:18:30.264 "trsvcid": "51402" 00:18:30.264 }, 00:18:30.264 "auth": { 00:18:30.264 "state": "completed", 00:18:30.264 "digest": "sha512", 00:18:30.264 "dhgroup": "ffdhe8192" 00:18:30.264 } 00:18:30.264 } 00:18:30.264 ]' 00:18:30.264 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.523 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.523 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.523 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.523 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.523 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.523 19:10:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.523 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.781 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:18:31.347 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.347 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:31.347 19:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.347 19:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.347 19:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.347 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.347 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:31.347 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:31.347 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:31.348 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.348 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:31.348 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:31.348 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:31.348 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.348 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.348 19:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.348 19:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.606 19:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.606 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.606 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.864 00:18:31.864 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.864 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.864 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.157 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.157 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.157 19:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.157 19:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.157 19:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.157 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.157 { 00:18:32.157 "cntlid": 139, 00:18:32.157 "qid": 0, 00:18:32.157 "state": "enabled", 00:18:32.157 "thread": "nvmf_tgt_poll_group_000", 00:18:32.157 "listen_address": { 00:18:32.157 "trtype": "TCP", 00:18:32.157 "adrfam": "IPv4", 00:18:32.157 "traddr": "10.0.0.2", 00:18:32.157 "trsvcid": "4420" 00:18:32.157 }, 00:18:32.157 "peer_address": { 00:18:32.157 "trtype": "TCP", 00:18:32.157 "adrfam": "IPv4", 00:18:32.157 "traddr": "10.0.0.1", 00:18:32.157 "trsvcid": "51428" 00:18:32.157 }, 00:18:32.157 "auth": { 00:18:32.157 "state": "completed", 00:18:32.157 "digest": "sha512", 00:18:32.157 "dhgroup": "ffdhe8192" 00:18:32.157 } 00:18:32.157 } 00:18:32.157 ]' 00:18:32.157 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.157 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.157 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.157 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.157 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.157 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.157 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.157 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.416 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTE4ZTgxYTAwNjA2ZDYwZjhlODRlZDcxZDg2OTM1MGbElLHa: --dhchap-ctrl-secret DHHC-1:02:YWRmYTk2ZjQxZWE5ZmI4ZGMwMzJiNjUzNTgzMmMxNWUzNzc1YTk3MDcyYWFkNDkzfAKFrQ==: 00:18:32.984 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.984 19:10:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:32.984 19:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.984 19:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.984 19:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.984 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.984 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:32.984 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:33.243 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:33.244 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.244 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:33.244 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:33.244 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:33.244 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.244 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.244 19:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.244 19:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.244 19:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.244 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.244 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.813 00:18:33.813 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.813 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.813 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.813 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.813 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.813 19:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.813 19:10:36 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:33.813 19:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.813 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.813 { 00:18:33.813 "cntlid": 141, 00:18:33.813 "qid": 0, 00:18:33.813 "state": "enabled", 00:18:33.813 "thread": "nvmf_tgt_poll_group_000", 00:18:33.813 "listen_address": { 00:18:33.813 "trtype": "TCP", 00:18:33.813 "adrfam": "IPv4", 00:18:33.813 "traddr": "10.0.0.2", 00:18:33.813 "trsvcid": "4420" 00:18:33.813 }, 00:18:33.813 "peer_address": { 00:18:33.813 "trtype": "TCP", 00:18:33.813 "adrfam": "IPv4", 00:18:33.813 "traddr": "10.0.0.1", 00:18:33.813 "trsvcid": "51444" 00:18:33.813 }, 00:18:33.813 "auth": { 00:18:33.813 "state": "completed", 00:18:33.813 "digest": "sha512", 00:18:33.813 "dhgroup": "ffdhe8192" 00:18:33.813 } 00:18:33.813 } 00:18:33.813 ]' 00:18:33.813 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.072 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.072 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.072 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.072 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.072 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.072 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.072 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.331 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:YTJkM2M1ODA2MWU4YzhmODA3NDhkZjk0NGI1NTEzOWJkODJlMjA0MjRlZDI2MTVhGHmCzg==: --dhchap-ctrl-secret DHHC-1:01:ODMyYmI5YjdjYWE1ZTk5MmFiNDY1NDI5ZTY3YjkzNjEcUy4f: 00:18:34.899 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.899 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:34.899 19:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.899 19:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.899 19:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.900 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.900 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:34.900 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:34.900 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:18:34.900 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.900 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:34.900 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:34.900 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:34.900 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.900 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:34.900 19:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.900 19:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.900 19:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.900 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.900 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.468 00:18:35.468 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.468 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.468 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.728 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.728 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.728 19:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.728 19:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.728 19:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.728 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.728 { 00:18:35.728 "cntlid": 143, 00:18:35.728 "qid": 0, 00:18:35.728 "state": "enabled", 00:18:35.728 "thread": "nvmf_tgt_poll_group_000", 00:18:35.728 "listen_address": { 00:18:35.728 "trtype": "TCP", 00:18:35.728 "adrfam": "IPv4", 00:18:35.728 "traddr": "10.0.0.2", 00:18:35.728 "trsvcid": "4420" 00:18:35.728 }, 00:18:35.728 "peer_address": { 00:18:35.728 "trtype": "TCP", 00:18:35.728 "adrfam": "IPv4", 00:18:35.728 "traddr": "10.0.0.1", 00:18:35.728 "trsvcid": "36892" 00:18:35.728 }, 00:18:35.728 "auth": { 00:18:35.728 "state": "completed", 00:18:35.728 "digest": "sha512", 00:18:35.728 "dhgroup": "ffdhe8192" 00:18:35.728 } 00:18:35.728 } 00:18:35.728 ]' 00:18:35.728 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.728 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.728 
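Each connect_authenticate round in this trace has the same shape; a condensed sketch of the key3 round follows, with socket paths, NQNs, flags, and jq filters taken verbatim from the trace (rpc_cmd is the suite's target-side rpc.py wrapper; piping get_qpairs straight into jq condenses the qpairs variable used above):

  HOSTRPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock'
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  # host: restrict this round to one digest/dhgroup combination
  $HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # target: allow the host with this round's DH-HMAC-CHAP key
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3
  # host: attach, which drives the authentication handshake
  $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3
  # target: confirm the negotiated parameters on the new qpair
  rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect: completed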
19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.728 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.728 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.728 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.728 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.728 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.987 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:18:36.555 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.556 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:36.556 19:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.556 19:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.556 19:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.556 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:36.556 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:36.556 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:36.556 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:36.556 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:36.556 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:36.815 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:36.815 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.815 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:36.815 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:36.815 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:36.815 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.815 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:36.815 19:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.815 19:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.815 19:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.815 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.815 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.385 00:18:37.385 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.385 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.385 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.385 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.385 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.385 19:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.385 19:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.385 19:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.385 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.385 { 00:18:37.385 "cntlid": 145, 00:18:37.385 "qid": 0, 00:18:37.385 "state": "enabled", 00:18:37.385 "thread": "nvmf_tgt_poll_group_000", 00:18:37.385 "listen_address": { 00:18:37.385 "trtype": "TCP", 00:18:37.385 "adrfam": "IPv4", 00:18:37.385 "traddr": "10.0.0.2", 00:18:37.385 "trsvcid": "4420" 00:18:37.385 }, 00:18:37.385 "peer_address": { 00:18:37.385 "trtype": "TCP", 00:18:37.385 "adrfam": "IPv4", 00:18:37.385 "traddr": "10.0.0.1", 00:18:37.385 "trsvcid": "36914" 00:18:37.385 }, 00:18:37.385 "auth": { 00:18:37.385 "state": "completed", 00:18:37.385 "digest": "sha512", 00:18:37.385 "dhgroup": "ffdhe8192" 00:18:37.385 } 00:18:37.385 } 00:18:37.385 ]' 00:18:37.385 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.385 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.385 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.385 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:37.385 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.644 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.644 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.644 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.644 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZjE2YjY0MjI1MzM1NTc2NmViYTI1NWFkNmZiNmIzYzBjMjAwNDVkMzlhMWM5YTAzqtpPPA==: --dhchap-ctrl-secret DHHC-1:03:MGU2ZjgyODA2MzU4Y2E1NzM1MDVhNTE1ZWU1YmYwYTZmNWM0MzEwMjg5YzFmYzdmYTVmMDIyYjVkYzNmNDNmN0NMJ0A=: 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:38.213 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:18:38.781 request: 00:18:38.781 { 00:18:38.781 "name": "nvme0", 00:18:38.781 "trtype": "tcp", 00:18:38.781 "traddr": "10.0.0.2", 00:18:38.781 "adrfam": "ipv4", 00:18:38.781 "trsvcid": "4420", 00:18:38.781 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:38.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:38.782 "prchk_reftag": false, 00:18:38.782 "prchk_guard": false, 00:18:38.782 "hdgst": false, 00:18:38.782 "ddgst": false, 00:18:38.782 "dhchap_key": "key2", 00:18:38.782 "method": "bdev_nvme_attach_controller", 00:18:38.782 "req_id": 1 00:18:38.782 } 00:18:38.782 Got JSON-RPC error response 00:18:38.782 response: 00:18:38.782 { 00:18:38.782 "code": -5, 00:18:38.782 "message": "Input/output error" 00:18:38.782 } 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:38.782 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:39.351 request: 00:18:39.351 { 00:18:39.351 "name": "nvme0", 00:18:39.351 "trtype": "tcp", 00:18:39.351 "traddr": "10.0.0.2", 00:18:39.351 "adrfam": "ipv4", 00:18:39.351 "trsvcid": "4420", 00:18:39.351 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:39.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:39.351 "prchk_reftag": false, 00:18:39.351 "prchk_guard": false, 00:18:39.351 "hdgst": false, 00:18:39.351 "ddgst": false, 00:18:39.351 "dhchap_key": "key1", 00:18:39.351 "dhchap_ctrlr_key": "ckey2", 00:18:39.351 "method": "bdev_nvme_attach_controller", 00:18:39.351 "req_id": 1 00:18:39.351 } 00:18:39.351 Got JSON-RPC error response 00:18:39.351 response: 00:18:39.351 { 00:18:39.351 "code": -5, 00:18:39.351 "message": "Input/output error" 00:18:39.351 } 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.351 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.610 request: 00:18:39.610 { 00:18:39.610 "name": "nvme0", 00:18:39.610 "trtype": "tcp", 00:18:39.610 "traddr": "10.0.0.2", 00:18:39.610 "adrfam": "ipv4", 00:18:39.610 "trsvcid": "4420", 00:18:39.610 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:39.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:39.610 "prchk_reftag": false, 00:18:39.610 "prchk_guard": false, 00:18:39.610 "hdgst": false, 00:18:39.610 "ddgst": false, 00:18:39.610 "dhchap_key": "key1", 00:18:39.610 "dhchap_ctrlr_key": "ckey1", 00:18:39.610 "method": "bdev_nvme_attach_controller", 00:18:39.610 "req_id": 1 00:18:39.610 } 00:18:39.610 Got JSON-RPC error response 00:18:39.610 response: 00:18:39.610 { 00:18:39.610 "code": -5, 00:18:39.610 "message": "Input/output error" 00:18:39.610 } 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 299549 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 299549 ']' 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 299549 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 299549 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 299549' 00:18:39.610 killing process with pid 299549 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 299549 00:18:39.610 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 299549 00:18:39.870 19:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:39.870 19:10:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:39.870 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:39.870 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.870 19:10:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=321256 00:18:39.870 19:10:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 321256 00:18:39.870 19:10:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:39.870 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 321256 ']' 00:18:39.870 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.870 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.870 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.870 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.870 19:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.806 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.807 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:40.807 19:10:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:40.807 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:40.807 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.807 19:10:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.807 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:40.807 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 321256 00:18:40.807 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 321256 ']' 00:18:40.807 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.807 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.807 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
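The target restart above reduces to the pattern below; the nvmf_tgt command line is copied from the trace, while the polling loop is only a minimal stand-in for the suite's waitforlisten helper (the real one in autotest_common.sh adds a timeout and richer diagnostics). --wait-for-rpc brings up only the RPC server, so the test can push configuration before subsystem initialization, and -L nvmf_auth enables the auth-specific debug log:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # poll the default RPC socket until the app answers
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
      sleep 0.5
  done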
00:18:40.807 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.807 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.066 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.636 00:18:41.636 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.636 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.636 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.896 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.896 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.896 19:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.896 19:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.896 19:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.896 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.896 { 00:18:41.896 
"cntlid": 1, 00:18:41.896 "qid": 0, 00:18:41.896 "state": "enabled", 00:18:41.896 "thread": "nvmf_tgt_poll_group_000", 00:18:41.896 "listen_address": { 00:18:41.896 "trtype": "TCP", 00:18:41.896 "adrfam": "IPv4", 00:18:41.896 "traddr": "10.0.0.2", 00:18:41.896 "trsvcid": "4420" 00:18:41.896 }, 00:18:41.896 "peer_address": { 00:18:41.896 "trtype": "TCP", 00:18:41.896 "adrfam": "IPv4", 00:18:41.896 "traddr": "10.0.0.1", 00:18:41.896 "trsvcid": "36970" 00:18:41.896 }, 00:18:41.896 "auth": { 00:18:41.896 "state": "completed", 00:18:41.896 "digest": "sha512", 00:18:41.896 "dhgroup": "ffdhe8192" 00:18:41.896 } 00:18:41.896 } 00:18:41.896 ]' 00:18:41.896 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.896 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.896 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.896 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:41.896 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.896 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.896 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.896 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.156 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NjJiOTc1OGM2Y2RkZTBhMDBmOTJiNDgyZmIxZGU3MjkzNzYzYTBkNDg5MmNjNzczODBlNWU5OWE5MjU2NTZjOC4c3PE=: 00:18:42.724 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.724 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:42.724 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.724 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.724 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.724 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:42.724 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.724 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.724 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.724 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:42.724 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.984 request: 00:18:42.984 { 00:18:42.984 "name": "nvme0", 00:18:42.984 "trtype": "tcp", 00:18:42.984 "traddr": "10.0.0.2", 00:18:42.984 "adrfam": "ipv4", 00:18:42.984 "trsvcid": "4420", 00:18:42.984 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:42.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:42.984 "prchk_reftag": false, 00:18:42.984 "prchk_guard": false, 00:18:42.984 "hdgst": false, 00:18:42.984 "ddgst": false, 00:18:42.984 "dhchap_key": "key3", 00:18:42.984 "method": "bdev_nvme_attach_controller", 00:18:42.984 "req_id": 1 00:18:42.984 } 00:18:42.984 Got JSON-RPC error response 00:18:42.984 response: 00:18:42.984 { 00:18:42.984 "code": -5, 00:18:42.984 "message": "Input/output error" 00:18:42.984 } 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:42.984 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:43.244 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.244 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:43.244 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.244 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:43.244 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:43.244 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:43.244 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:43.244 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.244 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.504 request: 00:18:43.504 { 00:18:43.504 "name": "nvme0", 00:18:43.504 "trtype": "tcp", 00:18:43.504 "traddr": "10.0.0.2", 00:18:43.504 "adrfam": "ipv4", 00:18:43.504 "trsvcid": "4420", 00:18:43.504 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:43.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:43.504 "prchk_reftag": false, 00:18:43.504 "prchk_guard": false, 00:18:43.504 "hdgst": false, 00:18:43.504 "ddgst": false, 00:18:43.504 "dhchap_key": "key3", 00:18:43.504 "method": "bdev_nvme_attach_controller", 00:18:43.504 "req_id": 1 00:18:43.504 } 00:18:43.504 Got JSON-RPC error response 00:18:43.504 response: 00:18:43.504 { 00:18:43.504 "code": -5, 00:18:43.504 "message": "Input/output error" 00:18:43.504 } 00:18:43.504 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:43.504 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:43.504 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:43.504 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:43.504 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:43.504 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:43.504 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:43.504 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:43.504 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:43.504 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:43.762 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:43.762 request: 00:18:43.762 { 00:18:43.762 "name": "nvme0", 00:18:43.762 "trtype": "tcp", 00:18:43.762 "traddr": "10.0.0.2", 00:18:43.762 "adrfam": "ipv4", 00:18:43.762 "trsvcid": "4420", 00:18:43.762 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:43.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:43.762 "prchk_reftag": false, 00:18:43.762 "prchk_guard": false, 00:18:43.762 "hdgst": false, 00:18:43.762 "ddgst": false, 00:18:43.762 
"dhchap_key": "key0", 00:18:43.762 "dhchap_ctrlr_key": "key1", 00:18:43.763 "method": "bdev_nvme_attach_controller", 00:18:43.763 "req_id": 1 00:18:43.763 } 00:18:43.763 Got JSON-RPC error response 00:18:43.763 response: 00:18:43.763 { 00:18:43.763 "code": -5, 00:18:43.763 "message": "Input/output error" 00:18:43.763 } 00:18:43.763 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:43.763 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:43.763 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:43.763 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:43.763 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:43.763 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:44.022 00:18:44.022 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:44.022 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:44.022 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.281 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.281 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.281 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.540 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:44.540 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:44.540 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 299649 00:18:44.540 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 299649 ']' 00:18:44.540 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 299649 00:18:44.540 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:44.540 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.540 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 299649 00:18:44.540 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:44.540 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:44.540 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 299649' 00:18:44.540 killing process with pid 299649 00:18:44.540 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 299649 00:18:44.540 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 299649 00:18:44.797 
19:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:44.797 rmmod nvme_tcp 00:18:44.797 rmmod nvme_fabrics 00:18:44.797 rmmod nvme_keyring 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 321256 ']' 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 321256 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 321256 ']' 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 321256 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.797 19:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 321256 00:18:45.054 19:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:45.054 19:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:45.054 19:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 321256' 00:18:45.054 killing process with pid 321256 00:18:45.054 19:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 321256 00:18:45.054 19:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 321256 00:18:45.054 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:45.054 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:45.054 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:45.054 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.054 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:45.054 19:10:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.055 19:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.055 19:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.591 19:10:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:47.591 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DW4 /tmp/spdk.key-sha256.r5n /tmp/spdk.key-sha384.53K /tmp/spdk.key-sha512.l4w /tmp/spdk.key-sha512.x61 /tmp/spdk.key-sha384.aAb /tmp/spdk.key-sha256.759 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:47.591 00:18:47.591 real 2m15.943s 00:18:47.591 user 5m11.218s 00:18:47.591 sys 0m20.972s 00:18:47.591 19:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:47.591 19:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.591 ************************************ 00:18:47.591 END TEST nvmf_auth_target 00:18:47.591 ************************************ 00:18:47.591 19:10:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:47.591 19:10:49 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:47.591 19:10:49 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:47.591 19:10:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:47.591 19:10:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.591 19:10:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:47.591 ************************************ 00:18:47.591 START TEST nvmf_bdevio_no_huge 00:18:47.591 ************************************ 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:47.591 * Looking for test storage... 00:18:47.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
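The trace above is the tail of build_nvmf_app_args: the target command line is accumulated in a bash array, with the NO_HUGE additions following just below. A rough reconstruction of the end result, using the flag values visible when the target is actually launched later in this log (not the verbatim nvmf/common.sh source):

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id (-i 0 here) + tracepoint mask
    NO_HUGE=(--no-huge -s 1024)                    # populated only for the no-hugepages variants
    NVMF_APP+=("${NO_HUGE[@]}")
    ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0x78   # as launched further down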
00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:47.591 19:10:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:52.868 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:52.868 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:52.868 Found net devices under 0000:86:00.0: cvl_0_0 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:52.868 Found net devices under 0000:86:00.1: cvl_0_1 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:52.868 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.869 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.869 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:52.869 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:52.869 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.869 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.869 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.869 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.869 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:52.869 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:53.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:18:53.129 00:18:53.129 --- 10.0.0.2 ping statistics --- 00:18:53.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.129 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:53.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:53.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:18:53.129 00:18:53.129 --- 10.0.0.1 ping statistics --- 00:18:53.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.129 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=325521 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 325521 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 325521 ']' 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.129 19:10:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.129 [2024-07-12 19:10:55.580988] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:18:53.129 [2024-07-12 19:10:55.581034] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:53.129 [2024-07-12 19:10:55.659495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:53.389 [2024-07-12 19:10:55.745313] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
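Everything from ip netns add up to the two pings above is the stock single-host topology for these phy runs: the two E810 ports (cvl_0_0 and cvl_0_1, found under PCI functions 0000:86:00.0/1 during the device scan) are presumably cabled back to back, so moving one of them into its own namespace lets a single machine act as target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, root namespace) at once. The same steps as a plain script, with the interface names taken from this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # and back the other way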
00:18:53.389 [2024-07-12 19:10:55.745346] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.389 [2024-07-12 19:10:55.745355] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.389 [2024-07-12 19:10:55.745361] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.389 [2024-07-12 19:10:55.745367] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.389 [2024-07-12 19:10:55.745481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:53.389 [2024-07-12 19:10:55.745590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:53.389 [2024-07-12 19:10:55.745707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.389 [2024-07-12 19:10:55.745708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.958 [2024-07-12 19:10:56.437095] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.958 Malloc0 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.958 19:10:56 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.958 [2024-07-12 19:10:56.477328] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:53.958 { 00:18:53.958 "params": { 00:18:53.958 "name": "Nvme$subsystem", 00:18:53.958 "trtype": "$TEST_TRANSPORT", 00:18:53.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.958 "adrfam": "ipv4", 00:18:53.958 "trsvcid": "$NVMF_PORT", 00:18:53.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.958 "hdgst": ${hdgst:-false}, 00:18:53.958 "ddgst": ${ddgst:-false} 00:18:53.958 }, 00:18:53.958 "method": "bdev_nvme_attach_controller" 00:18:53.958 } 00:18:53.958 EOF 00:18:53.958 )") 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:53.958 19:10:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:53.958 "params": { 00:18:53.958 "name": "Nvme1", 00:18:53.958 "trtype": "tcp", 00:18:53.958 "traddr": "10.0.0.2", 00:18:53.958 "adrfam": "ipv4", 00:18:53.958 "trsvcid": "4420", 00:18:53.958 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.958 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.958 "hdgst": false, 00:18:53.958 "ddgst": false 00:18:53.958 }, 00:18:53.958 "method": "bdev_nvme_attach_controller" 00:18:53.958 }' 00:18:53.958 [2024-07-12 19:10:56.524372] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
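The bdevio binary above takes its configuration as --json /dev/fd/62, which is the file-descriptor path bash process substitution hands out when the gen_nvmf_target_json output (the heredoc printed above) is fed in without a temporary file. A minimal, self-contained illustration of that plumbing, with gen_config as a hypothetical stand-in:

    # Hypothetical stand-in for gen_nvmf_target_json, just to show the mechanism:
    gen_config() {
        printf '{"params": {"name": "Nvme1", "trtype": "tcp"}}\n'
    }
    cat <(gen_config)        # the substituted path appears as /dev/fd/<n>
    # bdevio evidently receives its config the same way, hence --json /dev/fd/62 above.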
00:18:53.958 [2024-07-12 19:10:56.524417] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid325765 ] 00:18:54.218 [2024-07-12 19:10:56.594184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:54.218 [2024-07-12 19:10:56.681639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.218 [2024-07-12 19:10:56.681747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.218 [2024-07-12 19:10:56.681747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.477 I/O targets: 00:18:54.477 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:54.477 00:18:54.477 00:18:54.477 CUnit - A unit testing framework for C - Version 2.1-3 00:18:54.477 http://cunit.sourceforge.net/ 00:18:54.477 00:18:54.477 00:18:54.477 Suite: bdevio tests on: Nvme1n1 00:18:54.477 Test: blockdev write read block ...passed 00:18:54.477 Test: blockdev write zeroes read block ...passed 00:18:54.477 Test: blockdev write zeroes read no split ...passed 00:18:54.477 Test: blockdev write zeroes read split ...passed 00:18:54.477 Test: blockdev write zeroes read split partial ...passed 00:18:54.477 Test: blockdev reset ...[2024-07-12 19:10:56.949770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:54.477 [2024-07-12 19:10:56.949828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc68300 (9): Bad file descriptor 00:18:54.477 [2024-07-12 19:10:57.009308] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:54.477 passed 00:18:54.736 Test: blockdev write read 8 blocks ...passed 00:18:54.736 Test: blockdev write read size > 128k ...passed 00:18:54.736 Test: blockdev write read invalid size ...passed 00:18:54.736 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:54.736 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:54.736 Test: blockdev write read max offset ...passed 00:18:54.736 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:54.736 Test: blockdev writev readv 8 blocks ...passed 00:18:54.736 Test: blockdev writev readv 30 x 1block ...passed 00:18:54.736 Test: blockdev writev readv block ...passed 00:18:54.736 Test: blockdev writev readv size > 128k ...passed 00:18:54.736 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:54.736 Test: blockdev comparev and writev ...[2024-07-12 19:10:57.264162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.736 [2024-07-12 19:10:57.264190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:54.736 [2024-07-12 19:10:57.264204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.736 [2024-07-12 19:10:57.264211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:54.736 [2024-07-12 19:10:57.264456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.736 [2024-07-12 19:10:57.264466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:54.736 [2024-07-12 19:10:57.264478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.736 [2024-07-12 19:10:57.264485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:54.736 [2024-07-12 19:10:57.264731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.736 [2024-07-12 19:10:57.264740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:54.736 [2024-07-12 19:10:57.264752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.736 [2024-07-12 19:10:57.264759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:54.736 [2024-07-12 19:10:57.264989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.736 [2024-07-12 19:10:57.264998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:54.736 [2024-07-12 19:10:57.265014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.736 [2024-07-12 19:10:57.265021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:54.995 passed 00:18:54.995 Test: blockdev nvme passthru rw ...passed 00:18:54.995 Test: blockdev nvme passthru vendor specific ...[2024-07-12 19:10:57.346492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:54.995 [2024-07-12 19:10:57.346511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:54.995 [2024-07-12 19:10:57.346620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:54.995 [2024-07-12 19:10:57.346630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:54.995 [2024-07-12 19:10:57.346731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:54.995 [2024-07-12 19:10:57.346740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:54.995 [2024-07-12 19:10:57.346850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:54.995 [2024-07-12 19:10:57.346860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:54.995 passed 00:18:54.995 Test: blockdev nvme admin passthru ...passed 00:18:54.995 Test: blockdev copy ...passed 00:18:54.995 00:18:54.995 Run Summary: Type Total Ran Passed Failed Inactive 00:18:54.995 suites 1 1 n/a 0 0 00:18:54.995 tests 23 23 23 0 0 00:18:54.995 asserts 152 152 152 0 n/a 00:18:54.995 00:18:54.995 Elapsed time = 1.170 seconds 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:55.255 rmmod nvme_tcp 00:18:55.255 rmmod nvme_fabrics 00:18:55.255 rmmod nvme_keyring 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 325521 ']' 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 325521 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 325521 ']' 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 325521 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 325521 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 325521' 00:18:55.255 killing process with pid 325521 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 325521 00:18:55.255 19:10:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 325521 00:18:55.823 19:10:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:55.823 19:10:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:55.823 19:10:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:55.823 19:10:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:55.823 19:10:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:55.823 19:10:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.823 19:10:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.823 19:10:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.730 19:11:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:57.730 00:18:57.730 real 0m10.435s 00:18:57.730 user 0m12.775s 00:18:57.730 sys 0m5.136s 00:18:57.730 19:11:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:57.730 19:11:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.730 ************************************ 00:18:57.730 END TEST nvmf_bdevio_no_huge 00:18:57.730 ************************************ 00:18:57.730 19:11:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:57.730 19:11:00 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:57.730 19:11:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:57.730 19:11:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:57.730 19:11:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:57.730 ************************************ 00:18:57.730 START TEST nvmf_tls 00:18:57.730 ************************************ 00:18:57.730 19:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:57.990 * Looking for test storage... 
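The START TEST/END TEST banners and the real/user/sys block that bracket each suite come from the run_test helper in autotest_common.sh. Its shape, inferred from this output alone rather than copied from the SPDK source:

    run_test() {                          # approximate reconstruction
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                         # produces the real/user/sys lines above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }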
00:18:57.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:57.990 19:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:19:03.266 
19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:03.266 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:03.267 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:03.267 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:03.267 Found net devices under 0000:86:00.0: cvl_0_0 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:03.267 Found net devices under 0000:86:00.1: cvl_0_1 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:03.267 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:03.545 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:03.545 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:03.545 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:03.545 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:03.545 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:03.545 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:03.545 19:11:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:03.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:03.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:19:03.545 00:19:03.545 --- 10.0.0.2 ping statistics --- 00:19:03.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.545 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:03.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:03.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:19:03.545 00:19:03.545 --- 10.0.0.1 ping statistics --- 00:19:03.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.545 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=329454 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 329454 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 329454 ']' 00:19:03.545 19:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.546 19:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:03.546 19:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.546 19:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:03.546 19:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.546 [2024-07-12 19:11:06.099856] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:19:03.546 [2024-07-12 19:11:06.099898] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.806 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.806 [2024-07-12 19:11:06.173053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.806 [2024-07-12 19:11:06.244435] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.806 [2024-07-12 19:11:06.244480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
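For orientation, the interface plumbing that nvmf_tcp_init traced out above condenses to the sketch below. Every command is taken from the trace itself; the only liberty is abbreviating the nvmf_tgt binary path (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt in this run).

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # NVMF_TARGET_INTERFACE moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # NVMF_INITIATOR_INTERFACE stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root namespace -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator address
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc

With the two e810 ports (0000:86:00.0 and 0000:86:00.1) bridged only through real NIC hardware and the namespace boundary, one host exercises both ends of the NVMe/TCP connection, as the two successful pings above confirm.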
00:19:03.806 [2024-07-12 19:11:06.244486] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.806 [2024-07-12 19:11:06.244492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.806 [2024-07-12 19:11:06.244497] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:03.806 [2024-07-12 19:11:06.244533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.374 19:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.374 19:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:04.374 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:04.374 19:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:04.374 19:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.634 19:11:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.634 19:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:04.634 19:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:04.634 true 00:19:04.634 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:04.634 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:04.893 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:04.893 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:04.893 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:04.893 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:05.152 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:05.152 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:05.152 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:05.152 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:05.412 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:05.412 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:05.672 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:05.672 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:05.672 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:05.672 19:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:05.672 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:05.672 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:05.672 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:05.931 19:11:08 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:05.931 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:05.931 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:05.931 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:05.931 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:06.190 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:06.190 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.itO0seXiIb 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.0624C3pMjF 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.itO0seXiIb 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.0624C3pMjF 00:19:06.450 19:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:19:06.709 19:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:06.968 19:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.itO0seXiIb 00:19:06.968 19:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.itO0seXiIb 00:19:06.968 19:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:06.968 [2024-07-12 19:11:09.505731] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.968 19:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:07.227 19:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:07.486 [2024-07-12 19:11:09.838571] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:07.486 [2024-07-12 19:11:09.838783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.486 19:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:07.486 malloc0 00:19:07.746 19:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:07.746 19:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.itO0seXiIb 00:19:08.005 [2024-07-12 19:11:10.396427] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:08.005 19:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.itO0seXiIb 00:19:08.005 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.988 Initializing NVMe Controllers 00:19:17.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:17.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:17.988 Initialization complete. Launching workers. 
00:19:17.988 ======================================================== 00:19:17.988 Latency(us) 00:19:17.988 Device Information : IOPS MiB/s Average min max 00:19:17.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16459.52 64.29 3888.74 830.87 6020.40 00:19:17.988 ======================================================== 00:19:17.988 Total : 16459.52 64.29 3888.74 830.87 6020.40 00:19:17.988 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.itO0seXiIb 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.itO0seXiIb' 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=331860 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 331860 /var/tmp/bdevperf.sock 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 331860 ']' 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.988 19:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.247 [2024-07-12 19:11:20.568869] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
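Stripped of the xtrace noise, the TLS bring-up that produced the spdk_nvme_perf numbers above is sketched below. rpc.py and spdk_nvme_perf stand for the full workspace paths in the log, and every flag is copied from the trace; note that because nvmf_tgt was started with --wait-for-rpc, the ssl socket-implementation options are committed before framework_start_init.

  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py sock_impl_get_options -i ssl | jq -r .tls_version     # round-trip check, expect 13
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.itO0seXiIb
  ip netns exec cvl_0_0_ns_spdk spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path /tmp/tmp.itO0seXiIb

The -k on nvmf_subsystem_add_listener is what opts the listener into TLS (hence the "TLS support is considered experimental" notices), and --psk/--psk-path on the two ends point at the same chmod-0600 key file.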
00:19:18.247 [2024-07-12 19:11:20.568914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331860 ] 00:19:18.247 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.247 [2024-07-12 19:11:20.637053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.247 [2024-07-12 19:11:20.710532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.816 19:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.816 19:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:18.817 19:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.itO0seXiIb 00:19:19.076 [2024-07-12 19:11:21.541920] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:19.076 [2024-07-12 19:11:21.541993] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:19.076 TLSTESTn1 00:19:19.076 19:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:19.334 Running I/O for 10 seconds... 00:19:29.312 00:19:29.312 Latency(us) 00:19:29.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.312 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:29.312 Verification LBA range: start 0x0 length 0x2000 00:19:29.312 TLSTESTn1 : 10.01 5096.87 19.91 0.00 0.00 25077.87 5071.92 32597.04 00:19:29.312 =================================================================================================================== 00:19:29.312 Total : 5096.87 19.91 0.00 0.00 25077.87 5071.92 32597.04 00:19:29.312 0 00:19:29.312 19:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:29.312 19:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 331860 00:19:29.313 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 331860 ']' 00:19:29.313 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 331860 00:19:29.313 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:29.313 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:29.313 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 331860 00:19:29.313 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:29.313 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:29.313 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 331860' 00:19:29.313 killing process with pid 331860 00:19:29.313 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 331860 00:19:29.313 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.313 00:19:29.313 Latency(us) 00:19:29.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:19:29.313 =================================================================================================================== 00:19:29.313 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:29.313 [2024-07-12 19:11:31.800659] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:29.313 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 331860 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0624C3pMjF 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0624C3pMjF 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0624C3pMjF 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0624C3pMjF' 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=333696 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 333696 /var/tmp/bdevperf.sock 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 333696 ']' 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.573 19:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.573 [2024-07-12 19:11:32.029117] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
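Each bdevperf leg in this run repeats the same two RPC steps, shown here stripped to essentials (rpc.py and bdevperf.py abbreviate the full workspace paths; bdevperf itself was launched with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.itO0seXiIb
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

With the correct key the attach creates bdev TLSTESTn1 and perform_tests drives the 10-second verify run whose numbers appear above; the failure cases that follow vary only the key or the NQN pair.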
00:19:29.573 [2024-07-12 19:11:32.029168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333696 ] 00:19:29.573 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.573 [2024-07-12 19:11:32.090312] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.833 [2024-07-12 19:11:32.158681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.402 19:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.402 19:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:30.402 19:11:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0624C3pMjF 00:19:30.661 [2024-07-12 19:11:33.009574] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.661 [2024-07-12 19:11:33.009665] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:30.661 [2024-07-12 19:11:33.016266] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:30.661 [2024-07-12 19:11:33.016931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e6570 (107): Transport endpoint is not connected 00:19:30.661 [2024-07-12 19:11:33.017919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e6570 (9): Bad file descriptor 00:19:30.661 [2024-07-12 19:11:33.018920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:30.661 [2024-07-12 19:11:33.018932] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:30.661 [2024-07-12 19:11:33.018944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:30.661 request: 00:19:30.661 { 00:19:30.661 "name": "TLSTEST", 00:19:30.661 "trtype": "tcp", 00:19:30.661 "traddr": "10.0.0.2", 00:19:30.661 "adrfam": "ipv4", 00:19:30.661 "trsvcid": "4420", 00:19:30.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.661 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.661 "prchk_reftag": false, 00:19:30.661 "prchk_guard": false, 00:19:30.661 "hdgst": false, 00:19:30.661 "ddgst": false, 00:19:30.661 "psk": "/tmp/tmp.0624C3pMjF", 00:19:30.661 "method": "bdev_nvme_attach_controller", 00:19:30.661 "req_id": 1 00:19:30.661 } 00:19:30.661 Got JSON-RPC error response 00:19:30.661 response: 00:19:30.661 { 00:19:30.661 "code": -5, 00:19:30.661 "message": "Input/output error" 00:19:30.661 } 00:19:30.661 19:11:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 333696 00:19:30.661 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 333696 ']' 00:19:30.661 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 333696 00:19:30.661 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:30.661 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:30.661 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 333696 00:19:30.661 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:30.661 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:30.661 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 333696' 00:19:30.661 killing process with pid 333696 00:19:30.661 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 333696 00:19:30.661 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.661 00:19:30.661 Latency(us) 00:19:30.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.661 =================================================================================================================== 00:19:30.661 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:30.661 [2024-07-12 19:11:33.093425] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:30.661 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 333696 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.itO0seXiIb 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.itO0seXiIb 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.itO0seXiIb 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.itO0seXiIb' 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=333938 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 333938 /var/tmp/bdevperf.sock 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 333938 ']' 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.922 19:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.922 [2024-07-12 19:11:33.313286] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
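All of the failure cases (146, 149, 152, 155) run run_bdevperf under the NOT helper from autotest_common.sh, whose exit-status bookkeeping is visible in the trace (es=0, es=1, (( es > 128 )), (( !es == 0 ))). A simplified sketch of that inversion, reconstructed from the trace rather than quoted from the script:

  NOT() {
      local es=0
      "$@" || es=$?                # run the wrapped command, e.g. run_bdevperf ...
      (( es > 128 )) && return 1   # death by signal is a real failure, not a pass
      (( es != 0 ))                # succeed only when the command itself failed
  }

So each negative test passes exactly when run_bdevperf reaches its "return 1" after the attach RPC comes back with the -5 Input/output error.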
00:19:30.922 [2024-07-12 19:11:33.313337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333938 ] 00:19:30.922 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.922 [2024-07-12 19:11:33.379771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.922 [2024-07-12 19:11:33.448967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.862 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.862 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:31.862 19:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.itO0seXiIb 00:19:31.862 [2024-07-12 19:11:34.283727] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.862 [2024-07-12 19:11:34.283813] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:31.862 [2024-07-12 19:11:34.289420] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:31.862 [2024-07-12 19:11:34.289445] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:31.862 [2024-07-12 19:11:34.289468] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:31.862 [2024-07-12 19:11:34.290034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1711570 (107): Transport endpoint is not connected 00:19:31.862 [2024-07-12 19:11:34.291023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1711570 (9): Bad file descriptor 00:19:31.862 [2024-07-12 19:11:34.292024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:31.862 [2024-07-12 19:11:34.292036] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:31.862 [2024-07-12 19:11:34.292048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
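The "Could not find PSK for identity" lines just above are the interesting part of this case: the target keys its PSK lookup on an identity string that binds the host NQN and the subsystem NQN together, so the key file that worked for host1/cnode1 is never even consulted when the initiator presents itself as host2. As printed by tcp.c and posix.c, the identity for this run has the shape "NVMe0R01 <hostnqn> <subnqn>", e.g.:

  printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1

The next case repeats the trick from the other side, keeping host1 but targeting nqn.2016-06.io.spdk:cnode2.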
00:19:31.862 request: 00:19:31.862 { 00:19:31.862 "name": "TLSTEST", 00:19:31.862 "trtype": "tcp", 00:19:31.862 "traddr": "10.0.0.2", 00:19:31.862 "adrfam": "ipv4", 00:19:31.862 "trsvcid": "4420", 00:19:31.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.862 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:31.862 "prchk_reftag": false, 00:19:31.862 "prchk_guard": false, 00:19:31.862 "hdgst": false, 00:19:31.862 "ddgst": false, 00:19:31.862 "psk": "/tmp/tmp.itO0seXiIb", 00:19:31.862 "method": "bdev_nvme_attach_controller", 00:19:31.862 "req_id": 1 00:19:31.862 } 00:19:31.862 Got JSON-RPC error response 00:19:31.862 response: 00:19:31.862 { 00:19:31.862 "code": -5, 00:19:31.862 "message": "Input/output error" 00:19:31.862 } 00:19:31.862 19:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 333938 00:19:31.862 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 333938 ']' 00:19:31.862 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 333938 00:19:31.862 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:31.862 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:31.862 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 333938 00:19:31.862 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:31.862 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:31.862 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 333938' 00:19:31.862 killing process with pid 333938 00:19:31.862 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 333938 00:19:31.862 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.862 00:19:31.862 Latency(us) 00:19:31.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.862 =================================================================================================================== 00:19:31.862 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:31.862 [2024-07-12 19:11:34.367798] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:31.862 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 333938 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.itO0seXiIb 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.itO0seXiIb 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.itO0seXiIb 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.itO0seXiIb' 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=334176 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 334176 /var/tmp/bdevperf.sock 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 334176 ']' 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:32.122 19:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.122 [2024-07-12 19:11:34.587558] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:19:32.122 [2024-07-12 19:11:34.587605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334176 ] 00:19:32.122 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.122 [2024-07-12 19:11:34.654106] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.383 [2024-07-12 19:11:34.722109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.952 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:32.952 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:32.952 19:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.itO0seXiIb 00:19:33.213 [2024-07-12 19:11:35.559659] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.213 [2024-07-12 19:11:35.559744] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:33.213 [2024-07-12 19:11:35.564120] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:33.213 [2024-07-12 19:11:35.564142] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:33.213 [2024-07-12 19:11:35.564165] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:33.213 [2024-07-12 19:11:35.564834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea8570 (107): Transport endpoint is not connected 00:19:33.213 [2024-07-12 19:11:35.565822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea8570 (9): Bad file descriptor 00:19:33.213 [2024-07-12 19:11:35.566822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:33.213 [2024-07-12 19:11:35.566833] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:33.213 [2024-07-12 19:11:35.566846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:33.213 request: 00:19:33.213 { 00:19:33.213 "name": "TLSTEST", 00:19:33.213 "trtype": "tcp", 00:19:33.213 "traddr": "10.0.0.2", 00:19:33.213 "adrfam": "ipv4", 00:19:33.213 "trsvcid": "4420", 00:19:33.213 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:33.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.213 "prchk_reftag": false, 00:19:33.213 "prchk_guard": false, 00:19:33.213 "hdgst": false, 00:19:33.213 "ddgst": false, 00:19:33.213 "psk": "/tmp/tmp.itO0seXiIb", 00:19:33.213 "method": "bdev_nvme_attach_controller", 00:19:33.213 "req_id": 1 00:19:33.213 } 00:19:33.213 Got JSON-RPC error response 00:19:33.213 response: 00:19:33.213 { 00:19:33.213 "code": -5, 00:19:33.213 "message": "Input/output error" 00:19:33.213 } 00:19:33.213 19:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 334176 00:19:33.213 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 334176 ']' 00:19:33.213 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 334176 00:19:33.213 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:33.213 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:33.213 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 334176 00:19:33.213 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:33.213 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:33.213 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 334176' 00:19:33.213 killing process with pid 334176 00:19:33.213 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 334176 00:19:33.213 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.213 00:19:33.213 Latency(us) 00:19:33.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.213 =================================================================================================================== 00:19:33.213 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:33.213 [2024-07-12 19:11:35.637066] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:33.213 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 334176 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=334408 00:19:33.473 19:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:33.474 19:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:33.474 19:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 334408 /var/tmp/bdevperf.sock 00:19:33.474 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 334408 ']' 00:19:33.474 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.474 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.474 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.474 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.474 19:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.474 [2024-07-12 19:11:35.858571] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:19:33.474 [2024-07-12 19:11:35.858617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334408 ] 00:19:33.474 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.474 [2024-07-12 19:11:35.923510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.474 [2024-07-12 19:11:35.993623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.412 19:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.412 19:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:34.412 19:11:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:34.412 [2024-07-12 19:11:36.815875] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:34.412 [2024-07-12 19:11:36.817835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6adaf0 (9): Bad file descriptor 00:19:34.412 [2024-07-12 19:11:36.818834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:34.412 [2024-07-12 19:11:36.818849] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:34.412 [2024-07-12 19:11:36.818861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
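Case 155, now starting, drops the key altogether: the attach below is issued with an empty psk against the listener that was created with -k. With no TLS credentials the session never comes up, and the same -5 Input/output error surfaces, this time preceded directly by the "Bad file descriptor" / "Transport endpoint is not connected" pair rather than a PSK-identity miss. The failing call, with paths abbreviated as before:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1        # note: no --psk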
00:19:34.412 request: 00:19:34.412 { 00:19:34.412 "name": "TLSTEST", 00:19:34.412 "trtype": "tcp", 00:19:34.412 "traddr": "10.0.0.2", 00:19:34.412 "adrfam": "ipv4", 00:19:34.412 "trsvcid": "4420", 00:19:34.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.412 "prchk_reftag": false, 00:19:34.412 "prchk_guard": false, 00:19:34.412 "hdgst": false, 00:19:34.412 "ddgst": false, 00:19:34.412 "method": "bdev_nvme_attach_controller", 00:19:34.412 "req_id": 1 00:19:34.412 } 00:19:34.412 Got JSON-RPC error response 00:19:34.412 response: 00:19:34.412 { 00:19:34.412 "code": -5, 00:19:34.412 "message": "Input/output error" 00:19:34.412 } 00:19:34.412 19:11:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 334408 00:19:34.412 19:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 334408 ']' 00:19:34.412 19:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 334408 00:19:34.412 19:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:34.412 19:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:34.412 19:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 334408 00:19:34.413 19:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:34.413 19:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:34.413 19:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 334408' 00:19:34.413 killing process with pid 334408 00:19:34.413 19:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 334408 00:19:34.413 Received shutdown signal, test time was about 10.000000 seconds 00:19:34.413 00:19:34.413 Latency(us) 00:19:34.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.413 =================================================================================================================== 00:19:34.413 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:34.413 19:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 334408 00:19:34.672 19:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:34.672 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:34.673 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:34.673 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:34.673 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:34.673 19:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 329454 00:19:34.673 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 329454 ']' 00:19:34.673 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 329454 00:19:34.673 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:34.673 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:34.673 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 329454 00:19:34.673 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:34.673 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:34.673 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 329454' 00:19:34.673 killing 
process with pid 329454 00:19:34.673 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 329454 00:19:34.673 [2024-07-12 19:11:37.099028] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:34.673 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 329454 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.gE3qqbMaoL 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.gE3qqbMaoL 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=334661 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 334661 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 334661 ']' 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.933 19:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.933 [2024-07-12 19:11:37.409354] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
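Both interchange keys in this log decode the same way, which makes format_interchange_psk easy to sanity-check by hand: drop the NVMeTLSkey-1:NN: prefix and the trailing colon, base64-decode, and you are left with the configured key bytes followed by four CRC32 bytes (32+4 bytes gives the 48-character strings for the :01: keys above, 48+4 gives the 72-character padded string for the :02: key just generated). A reconstruction of that encoding — the little-endian CRC placement is an assumption checked only against these outputs, and the NN field follows the digest argument (1 or 2 in this trace):

  python3 -c 'import base64, zlib; key = b"00112233445566778899aabbccddeeff"; crc = zlib.crc32(key).to_bytes(4, "little"); print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())'

Note the key material is the ASCII hex string itself, not its decoded bytes, which is why the 32-character and 48-character inputs yield 36- and 52-byte payloads respectively.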
00:19:34.933 [2024-07-12 19:11:37.409401] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.933 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.933 [2024-07-12 19:11:37.477677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.193 [2024-07-12 19:11:37.555047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.193 [2024-07-12 19:11:37.555079] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.193 [2024-07-12 19:11:37.555086] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.193 [2024-07-12 19:11:37.555092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.193 [2024-07-12 19:11:37.555097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:35.193 [2024-07-12 19:11:37.555128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.761 19:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.761 19:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:35.761 19:11:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:35.761 19:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:35.761 19:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.761 19:11:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.761 19:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.gE3qqbMaoL 00:19:35.761 19:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gE3qqbMaoL 00:19:35.761 19:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:36.021 [2024-07-12 19:11:38.398612] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.021 19:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:36.021 19:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:36.280 [2024-07-12 19:11:38.731477] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.280 [2024-07-12 19:11:38.731673] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.280 19:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:36.539 malloc0 00:19:36.539 19:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:36.539 19:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.gE3qqbMaoL 00:19:36.799 [2024-07-12 19:11:39.232940] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gE3qqbMaoL 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gE3qqbMaoL' 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=334919 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 334919 /var/tmp/bdevperf.sock 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 334919 ']' 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.799 19:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.799 [2024-07-12 19:11:39.276139] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
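The setup_nvmf_tgt steps traced above all go through scripts/rpc.py: create the TCP transport, create cnode1, add a listener with -k (recorded as secure_channel: true in the save_config dump further down), back it with malloc0, and register host1 with the PSK path. A sketch of the same sequence as raw JSON-RPC over the app's Unix socket; the framing (one JSON document per request and response) is an assumption, while the method and parameter names are the ones visible in the save_config dumps below:

import json
import socket

def rpc(sock_path: str, method: str, params: dict, req_id: int = 1) -> dict:
    # Send one JSON-RPC 2.0 request and read until a full JSON document arrives.
    req = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response")
            buf += chunk
            try:
                return json.loads(buf)
            except ValueError:
                continue

SOCK = "/var/tmp/spdk.sock"
NQN = "nqn.2016-06.io.spdk:cnode1"
# c2h_success: false appears in the dump below; assumed to come from the -o flag.
rpc(SOCK, "nvmf_create_transport", {"trtype": "TCP", "c2h_success": False})
rpc(SOCK, "nvmf_create_subsystem", {"nqn": NQN,
                                    "serial_number": "SPDK00000000000001",
                                    "max_namespaces": 10})
rpc(SOCK, "nvmf_subsystem_add_listener", {"nqn": NQN,
                                          "secure_channel": True,  # the -k flag
                                          "listen_address": {"trtype": "TCP",
                                                             "adrfam": "IPv4",
                                                             "traddr": "10.0.0.2",
                                                             "trsvcid": "4420"}})
rpc(SOCK, "bdev_malloc_create", {"name": "malloc0",
                                 "num_blocks": 8192, "block_size": 4096})
rpc(SOCK, "nvmf_subsystem_add_ns", {"nqn": NQN,
                                    "namespace": {"bdev_name": "malloc0"}})
rpc(SOCK, "nvmf_subsystem_add_host", {"nqn": NQN,
                                      "host": "nqn.2016-06.io.spdk:host1",
                                      "psk": "/tmp/tmp.gE3qqbMaoL"})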
00:19:36.799 [2024-07-12 19:11:39.276185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334919 ] 00:19:36.800 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.800 [2024-07-12 19:11:39.345027] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.059 [2024-07-12 19:11:39.424071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.629 19:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.629 19:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:37.629 19:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gE3qqbMaoL 00:19:37.888 [2024-07-12 19:11:40.295201] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.888 [2024-07-12 19:11:40.295281] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:37.888 TLSTESTn1 00:19:37.888 19:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:38.146 Running I/O for 10 seconds... 00:19:48.132 00:19:48.132 Latency(us) 00:19:48.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.132 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:48.132 Verification LBA range: start 0x0 length 0x2000 00:19:48.132 TLSTESTn1 : 10.02 4893.32 19.11 0.00 0.00 26119.65 5385.35 38295.82 00:19:48.132 =================================================================================================================== 00:19:48.132 Total : 4893.32 19.11 0.00 0.00 26119.65 5385.35 38295.82 00:19:48.132 0 00:19:48.132 19:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:48.132 19:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 334919 00:19:48.132 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 334919 ']' 00:19:48.132 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 334919 00:19:48.132 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:48.132 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.132 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 334919 00:19:48.132 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:48.132 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:48.132 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 334919' 00:19:48.132 killing process with pid 334919 00:19:48.132 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 334919 00:19:48.132 Received shutdown signal, test time was about 10.000000 seconds 00:19:48.132 00:19:48.132 Latency(us) 00:19:48.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:19:48.132 =================================================================================================================== 00:19:48.132 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:48.132 [2024-07-12 19:11:50.583302] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:48.132 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 334919 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.gE3qqbMaoL 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gE3qqbMaoL 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gE3qqbMaoL 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gE3qqbMaoL 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gE3qqbMaoL' 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=336762 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 336762 /var/tmp/bdevperf.sock 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 336762 ']' 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.393 19:11:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.393 [2024-07-12 19:11:50.814826] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
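The 10-second verify run above completed with 4893.32 IOPS at the configured 4096-byte I/O size, and the MiB/s column is just that product; a quick check:

# The table's MiB/s column is IOPS times the 4 KiB I/O size.
iops, io_size = 4893.32, 4096
print(iops * io_size / 2**20)   # ~19.11 MiB/s, matching the table above

After that run, the chmod 0666 above deliberately loosens the key file so that the next bdevperf attach is expected to fail; the NOT wrapper inverts the exit status of run_bdevperf to assert exactly that.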
00:19:48.393 [2024-07-12 19:11:50.814872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336762 ] 00:19:48.393 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.393 [2024-07-12 19:11:50.882737] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.393 [2024-07-12 19:11:50.950899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.332 19:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.333 19:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:49.333 19:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gE3qqbMaoL 00:19:49.333 [2024-07-12 19:11:51.773712] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.333 [2024-07-12 19:11:51.773766] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:49.333 [2024-07-12 19:11:51.773776] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.gE3qqbMaoL 00:19:49.333 request: 00:19:49.333 { 00:19:49.333 "name": "TLSTEST", 00:19:49.333 "trtype": "tcp", 00:19:49.333 "traddr": "10.0.0.2", 00:19:49.333 "adrfam": "ipv4", 00:19:49.333 "trsvcid": "4420", 00:19:49.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.333 "prchk_reftag": false, 00:19:49.333 "prchk_guard": false, 00:19:49.333 "hdgst": false, 00:19:49.333 "ddgst": false, 00:19:49.333 "psk": "/tmp/tmp.gE3qqbMaoL", 00:19:49.333 "method": "bdev_nvme_attach_controller", 00:19:49.333 "req_id": 1 00:19:49.333 } 00:19:49.333 Got JSON-RPC error response 00:19:49.333 response: 00:19:49.333 { 00:19:49.333 "code": -1, 00:19:49.333 "message": "Operation not permitted" 00:19:49.333 } 00:19:49.333 19:11:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 336762 00:19:49.333 19:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 336762 ']' 00:19:49.333 19:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 336762 00:19:49.333 19:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:49.333 19:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:49.333 19:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 336762 00:19:49.333 19:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:49.333 19:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:49.333 19:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 336762' 00:19:49.333 killing process with pid 336762 00:19:49.333 19:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 336762 00:19:49.333 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.333 00:19:49.333 Latency(us) 00:19:49.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.333 =================================================================================================================== 
00:19:49.333 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:49.333 19:11:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 336762 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 334661 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 334661 ']' 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 334661 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 334661 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 334661' 00:19:49.593 killing process with pid 334661 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 334661 00:19:49.593 [2024-07-12 19:11:52.063387] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:49.593 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 334661 00:19:49.853 19:11:52 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:49.853 19:11:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:49.853 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.853 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.853 19:11:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:49.853 19:11:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=337004 00:19:49.853 19:11:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 337004 00:19:49.853 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 337004 ']' 00:19:49.853 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.853 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.853 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.853 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.853 19:11:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.853 [2024-07-12 19:11:52.301581] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
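The failed attach above ("Incorrect permissions for PSK file", JSON-RPC code -1, "Operation not permitted") is the world-readable 0666 key being rejected before any connection is made; the all-zero shutdown table prints 18446744073709551616.00 (2^64) as the minimum because no I/O ever completed, so the initial minimum was presumably never replaced. A sketch of the permission rule the error message points at, assuming the loader refuses keys with any group/other access bits (consistent with 0600 passing and 0666 failing in this log):

import os
import stat

def psk_permissions_ok(path: str) -> bool:
    # Assumption: group/other bits must all be clear, i.e. 0600 is fine
    # and 0666 is not, matching the pass/fail pattern in the log.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0

print(psk_permissions_ok("/tmp/tmp.gE3qqbMaoL"))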
00:19:49.853 [2024-07-12 19:11:52.301627] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.853 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.853 [2024-07-12 19:11:52.368857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.113 [2024-07-12 19:11:52.435985] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.113 [2024-07-12 19:11:52.436024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.113 [2024-07-12 19:11:52.436031] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.113 [2024-07-12 19:11:52.436037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.113 [2024-07-12 19:11:52.436041] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:50.113 [2024-07-12 19:11:52.436078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.gE3qqbMaoL 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.gE3qqbMaoL 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.gE3qqbMaoL 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gE3qqbMaoL 00:19:50.682 19:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:50.941 [2024-07-12 19:11:53.307610] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.941 19:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:51.201 19:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:51.201 [2024-07-12 19:11:53.660507] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:19:51.201 [2024-07-12 19:11:53.660691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.201 19:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:51.460 malloc0 00:19:51.460 19:11:53 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gE3qqbMaoL 00:19:51.719 [2024-07-12 19:11:54.210050] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:51.719 [2024-07-12 19:11:54.210079] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:51.719 [2024-07-12 19:11:54.210117] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:51.719 request: 00:19:51.719 { 00:19:51.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.719 "host": "nqn.2016-06.io.spdk:host1", 00:19:51.719 "psk": "/tmp/tmp.gE3qqbMaoL", 00:19:51.719 "method": "nvmf_subsystem_add_host", 00:19:51.719 "req_id": 1 00:19:51.719 } 00:19:51.719 Got JSON-RPC error response 00:19:51.719 response: 00:19:51.719 { 00:19:51.719 "code": -32603, 00:19:51.719 "message": "Internal error" 00:19:51.719 } 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 337004 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 337004 ']' 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 337004 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 337004 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 337004' 00:19:51.719 killing process with pid 337004 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 337004 00:19:51.719 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 337004 00:19:51.979 19:11:54 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.gE3qqbMaoL 00:19:51.979 19:11:54 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:51.979 19:11:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:51.979 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:51.979 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.979 19:11:54 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=337487 00:19:51.979 19:11:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 337487 00:19:51.979 19:11:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:51.979 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 337487 ']' 00:19:51.979 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.979 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:51.979 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.979 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:51.979 19:11:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.979 [2024-07-12 19:11:54.536358] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:19:51.979 [2024-07-12 19:11:54.536404] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.239 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.239 [2024-07-12 19:11:54.605531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.239 [2024-07-12 19:11:54.673376] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.239 [2024-07-12 19:11:54.673415] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.239 [2024-07-12 19:11:54.673421] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.239 [2024-07-12 19:11:54.673427] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.239 [2024-07-12 19:11:54.673435] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
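Before this restart, the NOT setup_nvmf_tgt path above exercised the server-side version of the same guard: with the key still at 0666, nvmf_subsystem_add_host failed in tcp_load_psk and returned -32603 "Internal error". A sketch of that negative check as the test drives it, assuming rpc.py exits nonzero when the RPC errors (the rpc.py path is abbreviated here; the log uses the full CI workspace copy):

import subprocess

RPC = "scripts/rpc.py"   # abbreviated; see the full path in the trace above
cmd = [RPC, "nvmf_subsystem_add_host",
       "nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1",
       "--psk", "/tmp/tmp.gE3qqbMaoL"]
result = subprocess.run(cmd, capture_output=True, text=True)
# While the key file is mode 0666 this must fail, mirroring the NOT wrapper.
assert result.returncode != 0, "add_host unexpectedly succeeded with a 0666 key"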
00:19:52.239 [2024-07-12 19:11:54.673470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.808 19:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.808 19:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:52.808 19:11:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:52.808 19:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:52.808 19:11:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.067 19:11:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.067 19:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.gE3qqbMaoL 00:19:53.067 19:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gE3qqbMaoL 00:19:53.067 19:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:53.067 [2024-07-12 19:11:55.540443] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.067 19:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:53.326 19:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:53.326 [2024-07-12 19:11:55.889337] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.326 [2024-07-12 19:11:55.889544] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.586 19:11:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:53.586 malloc0 00:19:53.586 19:11:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:53.845 19:11:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gE3qqbMaoL 00:19:53.845 [2024-07-12 19:11:56.402969] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:54.104 19:11:56 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.104 19:11:56 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=337750 00:19:54.104 19:11:56 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.104 19:11:56 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 337750 /var/tmp/bdevperf.sock 00:19:54.104 19:11:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 337750 ']' 00:19:54.104 19:11:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.104 19:11:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.104 19:11:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.104 19:11:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.104 19:11:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.104 [2024-07-12 19:11:56.444127] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:19:54.104 [2024-07-12 19:11:56.444171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337750 ] 00:19:54.104 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.104 [2024-07-12 19:11:56.510425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.104 [2024-07-12 19:11:56.584423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.042 19:11:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.042 19:11:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:55.042 19:11:57 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gE3qqbMaoL 00:19:55.042 [2024-07-12 19:11:57.403188] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.042 [2024-07-12 19:11:57.403253] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:55.042 TLSTESTn1 00:19:55.042 19:11:57 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:55.302 19:11:57 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:55.302 "subsystems": [ 00:19:55.302 { 00:19:55.302 "subsystem": "keyring", 00:19:55.302 "config": [] 00:19:55.302 }, 00:19:55.302 { 00:19:55.302 "subsystem": "iobuf", 00:19:55.302 "config": [ 00:19:55.302 { 00:19:55.302 "method": "iobuf_set_options", 00:19:55.302 "params": { 00:19:55.302 "small_pool_count": 8192, 00:19:55.302 "large_pool_count": 1024, 00:19:55.302 "small_bufsize": 8192, 00:19:55.302 "large_bufsize": 135168 00:19:55.302 } 00:19:55.302 } 00:19:55.302 ] 00:19:55.302 }, 00:19:55.302 { 00:19:55.302 "subsystem": "sock", 00:19:55.302 "config": [ 00:19:55.302 { 00:19:55.302 "method": "sock_set_default_impl", 00:19:55.302 "params": { 00:19:55.302 "impl_name": "posix" 00:19:55.302 } 00:19:55.302 }, 00:19:55.302 { 00:19:55.302 "method": "sock_impl_set_options", 00:19:55.302 "params": { 00:19:55.302 "impl_name": "ssl", 00:19:55.302 "recv_buf_size": 4096, 00:19:55.302 "send_buf_size": 4096, 00:19:55.302 "enable_recv_pipe": true, 00:19:55.302 "enable_quickack": false, 00:19:55.302 "enable_placement_id": 0, 00:19:55.302 "enable_zerocopy_send_server": true, 00:19:55.302 "enable_zerocopy_send_client": false, 00:19:55.302 "zerocopy_threshold": 0, 00:19:55.302 "tls_version": 0, 00:19:55.302 "enable_ktls": false 00:19:55.302 } 00:19:55.302 }, 00:19:55.302 { 00:19:55.302 "method": "sock_impl_set_options", 00:19:55.302 "params": { 00:19:55.302 "impl_name": "posix", 00:19:55.302 "recv_buf_size": 2097152, 00:19:55.302 
"send_buf_size": 2097152, 00:19:55.302 "enable_recv_pipe": true, 00:19:55.302 "enable_quickack": false, 00:19:55.302 "enable_placement_id": 0, 00:19:55.302 "enable_zerocopy_send_server": true, 00:19:55.302 "enable_zerocopy_send_client": false, 00:19:55.302 "zerocopy_threshold": 0, 00:19:55.302 "tls_version": 0, 00:19:55.302 "enable_ktls": false 00:19:55.302 } 00:19:55.302 } 00:19:55.302 ] 00:19:55.302 }, 00:19:55.302 { 00:19:55.302 "subsystem": "vmd", 00:19:55.302 "config": [] 00:19:55.302 }, 00:19:55.302 { 00:19:55.302 "subsystem": "accel", 00:19:55.302 "config": [ 00:19:55.302 { 00:19:55.302 "method": "accel_set_options", 00:19:55.302 "params": { 00:19:55.302 "small_cache_size": 128, 00:19:55.302 "large_cache_size": 16, 00:19:55.302 "task_count": 2048, 00:19:55.302 "sequence_count": 2048, 00:19:55.302 "buf_count": 2048 00:19:55.302 } 00:19:55.302 } 00:19:55.302 ] 00:19:55.302 }, 00:19:55.302 { 00:19:55.302 "subsystem": "bdev", 00:19:55.302 "config": [ 00:19:55.302 { 00:19:55.302 "method": "bdev_set_options", 00:19:55.302 "params": { 00:19:55.302 "bdev_io_pool_size": 65535, 00:19:55.302 "bdev_io_cache_size": 256, 00:19:55.302 "bdev_auto_examine": true, 00:19:55.302 "iobuf_small_cache_size": 128, 00:19:55.302 "iobuf_large_cache_size": 16 00:19:55.302 } 00:19:55.302 }, 00:19:55.302 { 00:19:55.302 "method": "bdev_raid_set_options", 00:19:55.303 "params": { 00:19:55.303 "process_window_size_kb": 1024 00:19:55.303 } 00:19:55.303 }, 00:19:55.303 { 00:19:55.303 "method": "bdev_iscsi_set_options", 00:19:55.303 "params": { 00:19:55.303 "timeout_sec": 30 00:19:55.303 } 00:19:55.303 }, 00:19:55.303 { 00:19:55.303 "method": "bdev_nvme_set_options", 00:19:55.303 "params": { 00:19:55.303 "action_on_timeout": "none", 00:19:55.303 "timeout_us": 0, 00:19:55.303 "timeout_admin_us": 0, 00:19:55.303 "keep_alive_timeout_ms": 10000, 00:19:55.303 "arbitration_burst": 0, 00:19:55.303 "low_priority_weight": 0, 00:19:55.303 "medium_priority_weight": 0, 00:19:55.303 "high_priority_weight": 0, 00:19:55.303 "nvme_adminq_poll_period_us": 10000, 00:19:55.303 "nvme_ioq_poll_period_us": 0, 00:19:55.303 "io_queue_requests": 0, 00:19:55.303 "delay_cmd_submit": true, 00:19:55.303 "transport_retry_count": 4, 00:19:55.303 "bdev_retry_count": 3, 00:19:55.303 "transport_ack_timeout": 0, 00:19:55.303 "ctrlr_loss_timeout_sec": 0, 00:19:55.303 "reconnect_delay_sec": 0, 00:19:55.303 "fast_io_fail_timeout_sec": 0, 00:19:55.303 "disable_auto_failback": false, 00:19:55.303 "generate_uuids": false, 00:19:55.303 "transport_tos": 0, 00:19:55.303 "nvme_error_stat": false, 00:19:55.303 "rdma_srq_size": 0, 00:19:55.303 "io_path_stat": false, 00:19:55.303 "allow_accel_sequence": false, 00:19:55.303 "rdma_max_cq_size": 0, 00:19:55.303 "rdma_cm_event_timeout_ms": 0, 00:19:55.303 "dhchap_digests": [ 00:19:55.303 "sha256", 00:19:55.303 "sha384", 00:19:55.303 "sha512" 00:19:55.303 ], 00:19:55.303 "dhchap_dhgroups": [ 00:19:55.303 "null", 00:19:55.303 "ffdhe2048", 00:19:55.303 "ffdhe3072", 00:19:55.303 "ffdhe4096", 00:19:55.303 "ffdhe6144", 00:19:55.303 "ffdhe8192" 00:19:55.303 ] 00:19:55.303 } 00:19:55.303 }, 00:19:55.303 { 00:19:55.303 "method": "bdev_nvme_set_hotplug", 00:19:55.303 "params": { 00:19:55.303 "period_us": 100000, 00:19:55.303 "enable": false 00:19:55.303 } 00:19:55.303 }, 00:19:55.303 { 00:19:55.303 "method": "bdev_malloc_create", 00:19:55.303 "params": { 00:19:55.303 "name": "malloc0", 00:19:55.303 "num_blocks": 8192, 00:19:55.303 "block_size": 4096, 00:19:55.303 "physical_block_size": 4096, 00:19:55.303 "uuid": 
"4277633b-0d5c-4e5f-bbfe-3aed38f09db0", 00:19:55.303 "optimal_io_boundary": 0 00:19:55.303 } 00:19:55.303 }, 00:19:55.303 { 00:19:55.303 "method": "bdev_wait_for_examine" 00:19:55.303 } 00:19:55.303 ] 00:19:55.303 }, 00:19:55.303 { 00:19:55.303 "subsystem": "nbd", 00:19:55.303 "config": [] 00:19:55.303 }, 00:19:55.303 { 00:19:55.303 "subsystem": "scheduler", 00:19:55.303 "config": [ 00:19:55.303 { 00:19:55.303 "method": "framework_set_scheduler", 00:19:55.303 "params": { 00:19:55.303 "name": "static" 00:19:55.303 } 00:19:55.303 } 00:19:55.303 ] 00:19:55.303 }, 00:19:55.303 { 00:19:55.303 "subsystem": "nvmf", 00:19:55.303 "config": [ 00:19:55.303 { 00:19:55.303 "method": "nvmf_set_config", 00:19:55.303 "params": { 00:19:55.303 "discovery_filter": "match_any", 00:19:55.303 "admin_cmd_passthru": { 00:19:55.303 "identify_ctrlr": false 00:19:55.303 } 00:19:55.303 } 00:19:55.303 }, 00:19:55.303 { 00:19:55.303 "method": "nvmf_set_max_subsystems", 00:19:55.303 "params": { 00:19:55.303 "max_subsystems": 1024 00:19:55.303 } 00:19:55.303 }, 00:19:55.303 { 00:19:55.303 "method": "nvmf_set_crdt", 00:19:55.303 "params": { 00:19:55.303 "crdt1": 0, 00:19:55.303 "crdt2": 0, 00:19:55.303 "crdt3": 0 00:19:55.303 } 00:19:55.303 }, 00:19:55.303 { 00:19:55.303 "method": "nvmf_create_transport", 00:19:55.303 "params": { 00:19:55.303 "trtype": "TCP", 00:19:55.303 "max_queue_depth": 128, 00:19:55.303 "max_io_qpairs_per_ctrlr": 127, 00:19:55.303 "in_capsule_data_size": 4096, 00:19:55.303 "max_io_size": 131072, 00:19:55.303 "io_unit_size": 131072, 00:19:55.303 "max_aq_depth": 128, 00:19:55.303 "num_shared_buffers": 511, 00:19:55.303 "buf_cache_size": 4294967295, 00:19:55.303 "dif_insert_or_strip": false, 00:19:55.303 "zcopy": false, 00:19:55.303 "c2h_success": false, 00:19:55.303 "sock_priority": 0, 00:19:55.303 "abort_timeout_sec": 1, 00:19:55.303 "ack_timeout": 0, 00:19:55.303 "data_wr_pool_size": 0 00:19:55.303 } 00:19:55.303 }, 00:19:55.303 { 00:19:55.303 "method": "nvmf_create_subsystem", 00:19:55.303 "params": { 00:19:55.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.303 "allow_any_host": false, 00:19:55.303 "serial_number": "SPDK00000000000001", 00:19:55.303 "model_number": "SPDK bdev Controller", 00:19:55.303 "max_namespaces": 10, 00:19:55.303 "min_cntlid": 1, 00:19:55.303 "max_cntlid": 65519, 00:19:55.303 "ana_reporting": false 00:19:55.303 } 00:19:55.303 }, 00:19:55.303 { 00:19:55.303 "method": "nvmf_subsystem_add_host", 00:19:55.303 "params": { 00:19:55.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.303 "host": "nqn.2016-06.io.spdk:host1", 00:19:55.303 "psk": "/tmp/tmp.gE3qqbMaoL" 00:19:55.303 } 00:19:55.303 }, 00:19:55.303 { 00:19:55.303 "method": "nvmf_subsystem_add_ns", 00:19:55.303 "params": { 00:19:55.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.303 "namespace": { 00:19:55.303 "nsid": 1, 00:19:55.303 "bdev_name": "malloc0", 00:19:55.303 "nguid": "4277633B0D5C4E5FBBFE3AED38F09DB0", 00:19:55.303 "uuid": "4277633b-0d5c-4e5f-bbfe-3aed38f09db0", 00:19:55.303 "no_auto_visible": false 00:19:55.303 } 00:19:55.303 } 00:19:55.303 }, 00:19:55.303 { 00:19:55.303 "method": "nvmf_subsystem_add_listener", 00:19:55.303 "params": { 00:19:55.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.303 "listen_address": { 00:19:55.303 "trtype": "TCP", 00:19:55.303 "adrfam": "IPv4", 00:19:55.303 "traddr": "10.0.0.2", 00:19:55.303 "trsvcid": "4420" 00:19:55.303 }, 00:19:55.303 "secure_channel": true 00:19:55.303 } 00:19:55.303 } 00:19:55.303 ] 00:19:55.303 } 00:19:55.303 ] 00:19:55.303 }' 00:19:55.303 19:11:57 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:55.563 19:11:57 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:55.563 "subsystems": [ 00:19:55.563 { 00:19:55.563 "subsystem": "keyring", 00:19:55.563 "config": [] 00:19:55.563 }, 00:19:55.563 { 00:19:55.563 "subsystem": "iobuf", 00:19:55.563 "config": [ 00:19:55.563 { 00:19:55.563 "method": "iobuf_set_options", 00:19:55.563 "params": { 00:19:55.563 "small_pool_count": 8192, 00:19:55.563 "large_pool_count": 1024, 00:19:55.563 "small_bufsize": 8192, 00:19:55.563 "large_bufsize": 135168 00:19:55.563 } 00:19:55.563 } 00:19:55.563 ] 00:19:55.563 }, 00:19:55.563 { 00:19:55.563 "subsystem": "sock", 00:19:55.563 "config": [ 00:19:55.563 { 00:19:55.563 "method": "sock_set_default_impl", 00:19:55.563 "params": { 00:19:55.563 "impl_name": "posix" 00:19:55.563 } 00:19:55.563 }, 00:19:55.563 { 00:19:55.563 "method": "sock_impl_set_options", 00:19:55.563 "params": { 00:19:55.563 "impl_name": "ssl", 00:19:55.563 "recv_buf_size": 4096, 00:19:55.563 "send_buf_size": 4096, 00:19:55.563 "enable_recv_pipe": true, 00:19:55.563 "enable_quickack": false, 00:19:55.563 "enable_placement_id": 0, 00:19:55.563 "enable_zerocopy_send_server": true, 00:19:55.563 "enable_zerocopy_send_client": false, 00:19:55.563 "zerocopy_threshold": 0, 00:19:55.563 "tls_version": 0, 00:19:55.563 "enable_ktls": false 00:19:55.563 } 00:19:55.563 }, 00:19:55.563 { 00:19:55.563 "method": "sock_impl_set_options", 00:19:55.563 "params": { 00:19:55.563 "impl_name": "posix", 00:19:55.563 "recv_buf_size": 2097152, 00:19:55.563 "send_buf_size": 2097152, 00:19:55.563 "enable_recv_pipe": true, 00:19:55.563 "enable_quickack": false, 00:19:55.563 "enable_placement_id": 0, 00:19:55.563 "enable_zerocopy_send_server": true, 00:19:55.563 "enable_zerocopy_send_client": false, 00:19:55.563 "zerocopy_threshold": 0, 00:19:55.563 "tls_version": 0, 00:19:55.563 "enable_ktls": false 00:19:55.563 } 00:19:55.563 } 00:19:55.563 ] 00:19:55.563 }, 00:19:55.563 { 00:19:55.563 "subsystem": "vmd", 00:19:55.563 "config": [] 00:19:55.563 }, 00:19:55.563 { 00:19:55.563 "subsystem": "accel", 00:19:55.563 "config": [ 00:19:55.563 { 00:19:55.563 "method": "accel_set_options", 00:19:55.563 "params": { 00:19:55.563 "small_cache_size": 128, 00:19:55.563 "large_cache_size": 16, 00:19:55.563 "task_count": 2048, 00:19:55.563 "sequence_count": 2048, 00:19:55.563 "buf_count": 2048 00:19:55.563 } 00:19:55.563 } 00:19:55.563 ] 00:19:55.563 }, 00:19:55.563 { 00:19:55.563 "subsystem": "bdev", 00:19:55.563 "config": [ 00:19:55.563 { 00:19:55.563 "method": "bdev_set_options", 00:19:55.563 "params": { 00:19:55.563 "bdev_io_pool_size": 65535, 00:19:55.563 "bdev_io_cache_size": 256, 00:19:55.563 "bdev_auto_examine": true, 00:19:55.563 "iobuf_small_cache_size": 128, 00:19:55.563 "iobuf_large_cache_size": 16 00:19:55.563 } 00:19:55.563 }, 00:19:55.563 { 00:19:55.563 "method": "bdev_raid_set_options", 00:19:55.563 "params": { 00:19:55.563 "process_window_size_kb": 1024 00:19:55.563 } 00:19:55.563 }, 00:19:55.563 { 00:19:55.563 "method": "bdev_iscsi_set_options", 00:19:55.563 "params": { 00:19:55.563 "timeout_sec": 30 00:19:55.563 } 00:19:55.563 }, 00:19:55.563 { 00:19:55.563 "method": "bdev_nvme_set_options", 00:19:55.563 "params": { 00:19:55.563 "action_on_timeout": "none", 00:19:55.563 "timeout_us": 0, 00:19:55.563 "timeout_admin_us": 0, 00:19:55.563 "keep_alive_timeout_ms": 10000, 00:19:55.563 "arbitration_burst": 0, 
00:19:55.563 "low_priority_weight": 0, 00:19:55.563 "medium_priority_weight": 0, 00:19:55.563 "high_priority_weight": 0, 00:19:55.563 "nvme_adminq_poll_period_us": 10000, 00:19:55.563 "nvme_ioq_poll_period_us": 0, 00:19:55.563 "io_queue_requests": 512, 00:19:55.564 "delay_cmd_submit": true, 00:19:55.564 "transport_retry_count": 4, 00:19:55.564 "bdev_retry_count": 3, 00:19:55.564 "transport_ack_timeout": 0, 00:19:55.564 "ctrlr_loss_timeout_sec": 0, 00:19:55.564 "reconnect_delay_sec": 0, 00:19:55.564 "fast_io_fail_timeout_sec": 0, 00:19:55.564 "disable_auto_failback": false, 00:19:55.564 "generate_uuids": false, 00:19:55.564 "transport_tos": 0, 00:19:55.564 "nvme_error_stat": false, 00:19:55.564 "rdma_srq_size": 0, 00:19:55.564 "io_path_stat": false, 00:19:55.564 "allow_accel_sequence": false, 00:19:55.564 "rdma_max_cq_size": 0, 00:19:55.564 "rdma_cm_event_timeout_ms": 0, 00:19:55.564 "dhchap_digests": [ 00:19:55.564 "sha256", 00:19:55.564 "sha384", 00:19:55.564 "sha512" 00:19:55.564 ], 00:19:55.564 "dhchap_dhgroups": [ 00:19:55.564 "null", 00:19:55.564 "ffdhe2048", 00:19:55.564 "ffdhe3072", 00:19:55.564 "ffdhe4096", 00:19:55.564 "ffdhe6144", 00:19:55.564 "ffdhe8192" 00:19:55.564 ] 00:19:55.564 } 00:19:55.564 }, 00:19:55.564 { 00:19:55.564 "method": "bdev_nvme_attach_controller", 00:19:55.564 "params": { 00:19:55.564 "name": "TLSTEST", 00:19:55.564 "trtype": "TCP", 00:19:55.564 "adrfam": "IPv4", 00:19:55.564 "traddr": "10.0.0.2", 00:19:55.564 "trsvcid": "4420", 00:19:55.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.564 "prchk_reftag": false, 00:19:55.564 "prchk_guard": false, 00:19:55.564 "ctrlr_loss_timeout_sec": 0, 00:19:55.564 "reconnect_delay_sec": 0, 00:19:55.564 "fast_io_fail_timeout_sec": 0, 00:19:55.564 "psk": "/tmp/tmp.gE3qqbMaoL", 00:19:55.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.564 "hdgst": false, 00:19:55.564 "ddgst": false 00:19:55.564 } 00:19:55.564 }, 00:19:55.564 { 00:19:55.564 "method": "bdev_nvme_set_hotplug", 00:19:55.564 "params": { 00:19:55.564 "period_us": 100000, 00:19:55.564 "enable": false 00:19:55.564 } 00:19:55.564 }, 00:19:55.564 { 00:19:55.564 "method": "bdev_wait_for_examine" 00:19:55.564 } 00:19:55.564 ] 00:19:55.564 }, 00:19:55.564 { 00:19:55.564 "subsystem": "nbd", 00:19:55.564 "config": [] 00:19:55.564 } 00:19:55.564 ] 00:19:55.564 }' 00:19:55.564 19:11:57 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 337750 00:19:55.564 19:11:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 337750 ']' 00:19:55.564 19:11:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 337750 00:19:55.564 19:11:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:55.564 19:11:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.564 19:11:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 337750 00:19:55.564 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:55.564 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:55.564 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 337750' 00:19:55.564 killing process with pid 337750 00:19:55.564 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 337750 00:19:55.564 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.564 00:19:55.564 Latency(us) 00:19:55.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:55.564 =================================================================================================================== 00:19:55.564 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:55.564 [2024-07-12 19:11:58.041918] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:55.564 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 337750 00:19:55.823 19:11:58 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 337487 00:19:55.823 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 337487 ']' 00:19:55.823 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 337487 00:19:55.823 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:55.823 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:55.823 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 337487 00:19:55.823 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:55.823 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:55.823 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 337487' 00:19:55.823 killing process with pid 337487 00:19:55.823 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 337487 00:19:55.823 [2024-07-12 19:11:58.267433] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:55.823 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 337487 00:19:56.083 19:11:58 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:56.083 19:11:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:56.083 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.083 19:11:58 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:56.083 "subsystems": [ 00:19:56.083 { 00:19:56.083 "subsystem": "keyring", 00:19:56.083 "config": [] 00:19:56.083 }, 00:19:56.083 { 00:19:56.083 "subsystem": "iobuf", 00:19:56.083 "config": [ 00:19:56.083 { 00:19:56.083 "method": "iobuf_set_options", 00:19:56.083 "params": { 00:19:56.083 "small_pool_count": 8192, 00:19:56.083 "large_pool_count": 1024, 00:19:56.083 "small_bufsize": 8192, 00:19:56.083 "large_bufsize": 135168 00:19:56.083 } 00:19:56.083 } 00:19:56.083 ] 00:19:56.083 }, 00:19:56.083 { 00:19:56.083 "subsystem": "sock", 00:19:56.083 "config": [ 00:19:56.083 { 00:19:56.083 "method": "sock_set_default_impl", 00:19:56.083 "params": { 00:19:56.083 "impl_name": "posix" 00:19:56.083 } 00:19:56.083 }, 00:19:56.083 { 00:19:56.083 "method": "sock_impl_set_options", 00:19:56.083 "params": { 00:19:56.083 "impl_name": "ssl", 00:19:56.083 "recv_buf_size": 4096, 00:19:56.083 "send_buf_size": 4096, 00:19:56.083 "enable_recv_pipe": true, 00:19:56.083 "enable_quickack": false, 00:19:56.083 "enable_placement_id": 0, 00:19:56.083 "enable_zerocopy_send_server": true, 00:19:56.083 "enable_zerocopy_send_client": false, 00:19:56.083 "zerocopy_threshold": 0, 00:19:56.083 "tls_version": 0, 00:19:56.083 "enable_ktls": false 00:19:56.083 } 00:19:56.083 }, 00:19:56.083 { 00:19:56.083 "method": "sock_impl_set_options", 00:19:56.083 "params": { 00:19:56.083 "impl_name": "posix", 00:19:56.083 "recv_buf_size": 2097152, 
00:19:56.083 "send_buf_size": 2097152, 00:19:56.083 "enable_recv_pipe": true, 00:19:56.083 "enable_quickack": false, 00:19:56.083 "enable_placement_id": 0, 00:19:56.083 "enable_zerocopy_send_server": true, 00:19:56.083 "enable_zerocopy_send_client": false, 00:19:56.083 "zerocopy_threshold": 0, 00:19:56.083 "tls_version": 0, 00:19:56.083 "enable_ktls": false 00:19:56.083 } 00:19:56.083 } 00:19:56.083 ] 00:19:56.083 }, 00:19:56.083 { 00:19:56.083 "subsystem": "vmd", 00:19:56.083 "config": [] 00:19:56.083 }, 00:19:56.083 { 00:19:56.083 "subsystem": "accel", 00:19:56.083 "config": [ 00:19:56.083 { 00:19:56.083 "method": "accel_set_options", 00:19:56.083 "params": { 00:19:56.083 "small_cache_size": 128, 00:19:56.083 "large_cache_size": 16, 00:19:56.083 "task_count": 2048, 00:19:56.083 "sequence_count": 2048, 00:19:56.083 "buf_count": 2048 00:19:56.083 } 00:19:56.083 } 00:19:56.083 ] 00:19:56.083 }, 00:19:56.083 { 00:19:56.083 "subsystem": "bdev", 00:19:56.083 "config": [ 00:19:56.083 { 00:19:56.083 "method": "bdev_set_options", 00:19:56.083 "params": { 00:19:56.083 "bdev_io_pool_size": 65535, 00:19:56.083 "bdev_io_cache_size": 256, 00:19:56.083 "bdev_auto_examine": true, 00:19:56.083 "iobuf_small_cache_size": 128, 00:19:56.083 "iobuf_large_cache_size": 16 00:19:56.083 } 00:19:56.083 }, 00:19:56.083 { 00:19:56.083 "method": "bdev_raid_set_options", 00:19:56.083 "params": { 00:19:56.083 "process_window_size_kb": 1024 00:19:56.083 } 00:19:56.083 }, 00:19:56.083 { 00:19:56.083 "method": "bdev_iscsi_set_options", 00:19:56.083 "params": { 00:19:56.083 "timeout_sec": 30 00:19:56.083 } 00:19:56.083 }, 00:19:56.083 { 00:19:56.083 "method": "bdev_nvme_set_options", 00:19:56.083 "params": { 00:19:56.083 "action_on_timeout": "none", 00:19:56.083 "timeout_us": 0, 00:19:56.083 "timeout_admin_us": 0, 00:19:56.083 "keep_alive_timeout_ms": 10000, 00:19:56.083 "arbitration_burst": 0, 00:19:56.083 "low_priority_weight": 0, 00:19:56.083 "medium_priority_weight": 0, 00:19:56.083 "high_priority_weight": 0, 00:19:56.083 "nvme_adminq_poll_period_us": 10000, 00:19:56.083 "nvme_ioq_poll_period_us": 0, 00:19:56.083 "io_queue_requests": 0, 00:19:56.083 "delay_cmd_submit": true, 00:19:56.083 "transport_retry_count": 4, 00:19:56.083 "bdev_retry_count": 3, 00:19:56.083 "transport_ack_timeout": 0, 00:19:56.083 "ctrlr_loss_timeout_sec": 0, 00:19:56.083 "reconnect_delay_sec": 0, 00:19:56.083 "fast_io_fail_timeout_sec": 0, 00:19:56.083 "disable_auto_failback": false, 00:19:56.083 "generate_uuids": false, 00:19:56.083 "transport_tos": 0, 00:19:56.083 "nvme_error_stat": false, 00:19:56.083 "rdma_srq_size": 0, 00:19:56.083 "io_path_stat": false, 00:19:56.084 "allow_accel_sequence": false, 00:19:56.084 "rdma_max_cq_size": 0, 00:19:56.084 "rdma_cm_event_timeout_ms": 0, 00:19:56.084 "dhchap_digests": [ 00:19:56.084 "sha256", 00:19:56.084 "sha384", 00:19:56.084 "sha512" 00:19:56.084 ], 00:19:56.084 "dhchap_dhgroups": [ 00:19:56.084 "null", 00:19:56.084 "ffdhe2048", 00:19:56.084 "ffdhe3072", 00:19:56.084 "ffdhe4096", 00:19:56.084 "ffdhe6144", 00:19:56.084 "ffdhe8192" 00:19:56.084 ] 00:19:56.084 } 00:19:56.084 }, 00:19:56.084 { 00:19:56.084 "method": "bdev_nvme_set_hotplug", 00:19:56.084 "params": { 00:19:56.084 "period_us": 100000, 00:19:56.084 "enable": false 00:19:56.084 } 00:19:56.084 }, 00:19:56.084 { 00:19:56.084 "method": "bdev_malloc_create", 00:19:56.084 "params": { 00:19:56.084 "name": "malloc0", 00:19:56.084 "num_blocks": 8192, 00:19:56.084 "block_size": 4096, 00:19:56.084 "physical_block_size": 4096, 00:19:56.084 "uuid": 
"4277633b-0d5c-4e5f-bbfe-3aed38f09db0", 00:19:56.084 "optimal_io_boundary": 0 00:19:56.084 } 00:19:56.084 }, 00:19:56.084 { 00:19:56.084 "method": "bdev_wait_for_examine" 00:19:56.084 } 00:19:56.084 ] 00:19:56.084 }, 00:19:56.084 { 00:19:56.084 "subsystem": "nbd", 00:19:56.084 "config": [] 00:19:56.084 }, 00:19:56.084 { 00:19:56.084 "subsystem": "scheduler", 00:19:56.084 "config": [ 00:19:56.084 { 00:19:56.084 "method": "framework_set_scheduler", 00:19:56.084 "params": { 00:19:56.084 "name": "static" 00:19:56.084 } 00:19:56.084 } 00:19:56.084 ] 00:19:56.084 }, 00:19:56.084 { 00:19:56.084 "subsystem": "nvmf", 00:19:56.084 "config": [ 00:19:56.084 { 00:19:56.084 "method": "nvmf_set_config", 00:19:56.084 "params": { 00:19:56.084 "discovery_filter": "match_any", 00:19:56.084 "admin_cmd_passthru": { 00:19:56.084 "identify_ctrlr": false 00:19:56.084 } 00:19:56.084 } 00:19:56.084 }, 00:19:56.084 { 00:19:56.084 "method": "nvmf_set_max_subsystems", 00:19:56.084 "params": { 00:19:56.084 "max_subsystems": 1024 00:19:56.084 } 00:19:56.084 }, 00:19:56.084 { 00:19:56.084 "method": "nvmf_set_crdt", 00:19:56.084 "params": { 00:19:56.084 "crdt1": 0, 00:19:56.084 "crdt2": 0, 00:19:56.084 "crdt3": 0 00:19:56.084 } 00:19:56.084 }, 00:19:56.084 { 00:19:56.084 "method": "nvmf_create_transport", 00:19:56.084 "params": { 00:19:56.084 "trtype": "TCP", 00:19:56.084 "max_queue_depth": 128, 00:19:56.084 "max_io_qpairs_per_ctrlr": 127, 00:19:56.084 "in_capsule_data_size": 4096, 00:19:56.084 "max_io_size": 131072, 00:19:56.084 "io_unit_size": 131072, 00:19:56.084 "max_aq_depth": 128, 00:19:56.084 "num_shared_buffers": 511, 00:19:56.084 "buf_cache_size": 4294967295, 00:19:56.084 "dif_insert_or_strip": false, 00:19:56.084 "zcopy": false, 00:19:56.084 "c2h_success": false, 00:19:56.084 "sock_priority": 0, 00:19:56.084 "abort_timeout_sec": 1, 00:19:56.084 "ack_timeout": 0, 00:19:56.084 "data_wr_pool_size": 0 00:19:56.084 } 00:19:56.084 }, 00:19:56.084 { 00:19:56.084 "method": "nvmf_create_subsystem", 00:19:56.084 "params": { 00:19:56.084 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.084 "allow_any_host": false, 00:19:56.084 "serial_number": "SPDK00000000000001", 00:19:56.084 "model_number": "SPDK bdev Controller", 00:19:56.084 "max_namespaces": 10, 00:19:56.084 "min_cntlid": 1, 00:19:56.084 "max_cntlid": 65519, 00:19:56.084 "ana_reporting": false 00:19:56.084 } 00:19:56.084 }, 00:19:56.084 { 00:19:56.084 "method": "nvmf_subsystem_add_host", 00:19:56.084 "params": { 00:19:56.084 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.084 "host": "nqn.2016-06.io.spdk:host1", 00:19:56.084 "psk": "/tmp/tmp.gE3qqbMaoL" 00:19:56.084 } 00:19:56.084 }, 00:19:56.084 { 00:19:56.084 "method": "nvmf_subsystem_add_ns", 00:19:56.084 "params": { 00:19:56.084 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.084 "namespace": { 00:19:56.084 "nsid": 1, 00:19:56.084 "bdev_name": "malloc0", 00:19:56.084 "nguid": "4277633B0D5C4E5FBBFE3AED38F09DB0", 00:19:56.084 "uuid": "4277633b-0d5c-4e5f-bbfe-3aed38f09db0", 00:19:56.084 "no_auto_visible": false 00:19:56.084 } 00:19:56.084 } 00:19:56.084 }, 00:19:56.084 { 00:19:56.084 "method": "nvmf_subsystem_add_listener", 00:19:56.084 "params": { 00:19:56.084 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.084 "listen_address": { 00:19:56.084 "trtype": "TCP", 00:19:56.084 "adrfam": "IPv4", 00:19:56.084 "traddr": "10.0.0.2", 00:19:56.084 "trsvcid": "4420" 00:19:56.084 }, 00:19:56.084 "secure_channel": true 00:19:56.084 } 00:19:56.084 } 00:19:56.084 ] 00:19:56.084 } 00:19:56.084 ] 00:19:56.084 }' 00:19:56.084 19:11:58 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.084 19:11:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=338216 00:19:56.084 19:11:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 338216 00:19:56.084 19:11:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:56.084 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 338216 ']' 00:19:56.084 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.084 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.084 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.084 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.084 19:11:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.084 [2024-07-12 19:11:58.515575] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:19:56.084 [2024-07-12 19:11:58.515618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.084 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.084 [2024-07-12 19:11:58.581804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.343 [2024-07-12 19:11:58.660295] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.343 [2024-07-12 19:11:58.660328] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.343 [2024-07-12 19:11:58.660335] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.343 [2024-07-12 19:11:58.660342] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.343 [2024-07-12 19:11:58.660347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
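The JSON blob echoed above never touches disk: the trace shows nvmf_tgt starting with -c /dev/fd/62, which is the file descriptor bash assigns when the test feeds the configuration through process substitution inside the cvl_0_0_ns_spdk network namespace. A minimal sketch of that launch pattern, with $tgt_conf standing in as a placeholder for the full dump above and the workspace path shortened:

    # Hand an in-memory JSON config to nvmf_tgt; bash exposes the
    # substituted stream as /dev/fd/<N>, which is what the log records.
    tgt_conf='{ "subsystems": [] }'    # placeholder, not the real config
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgt_conf")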
00:19:56.343 [2024-07-12 19:11:58.660408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.343 [2024-07-12 19:11:58.861961] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.343 [2024-07-12 19:11:58.877934] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:56.343 [2024-07-12 19:11:58.893981] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.343 [2024-07-12 19:11:58.904541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.911 19:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.911 19:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:56.911 19:11:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:56.911 19:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.911 19:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.911 19:11:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.911 19:11:59 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=338252 00:19:56.911 19:11:59 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 338252 /var/tmp/bdevperf.sock 00:19:56.911 19:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 338252 ']' 00:19:56.911 19:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.911 19:11:59 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:56.911 19:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.911 19:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
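The initiator side uses the same trick: bdevperf is started with -z so it idles until a perform_tests RPC arrives on its private socket, and its configuration comes in on /dev/fd/63. Condensed from the command line in the trace, with $bperf_conf standing in for the JSON echoed next:

    # -z: defer I/O until the 'perform_tests' RPC; -r: per-instance RPC socket;
    # -q/-o/-w/-t: queue depth 128, 4096-byte I/Os, verify workload, 10 s runtime.
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bperf_conf")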
00:19:56.911 19:11:59 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:56.911 "subsystems": [ 00:19:56.911 { 00:19:56.911 "subsystem": "keyring", 00:19:56.911 "config": [] 00:19:56.911 }, 00:19:56.911 { 00:19:56.911 "subsystem": "iobuf", 00:19:56.911 "config": [ 00:19:56.911 { 00:19:56.911 "method": "iobuf_set_options", 00:19:56.911 "params": { 00:19:56.911 "small_pool_count": 8192, 00:19:56.911 "large_pool_count": 1024, 00:19:56.911 "small_bufsize": 8192, 00:19:56.911 "large_bufsize": 135168 00:19:56.911 } 00:19:56.911 } 00:19:56.911 ] 00:19:56.911 }, 00:19:56.911 { 00:19:56.911 "subsystem": "sock", 00:19:56.911 "config": [ 00:19:56.911 { 00:19:56.911 "method": "sock_set_default_impl", 00:19:56.911 "params": { 00:19:56.911 "impl_name": "posix" 00:19:56.911 } 00:19:56.911 }, 00:19:56.911 { 00:19:56.911 "method": "sock_impl_set_options", 00:19:56.911 "params": { 00:19:56.911 "impl_name": "ssl", 00:19:56.912 "recv_buf_size": 4096, 00:19:56.912 "send_buf_size": 4096, 00:19:56.912 "enable_recv_pipe": true, 00:19:56.912 "enable_quickack": false, 00:19:56.912 "enable_placement_id": 0, 00:19:56.912 "enable_zerocopy_send_server": true, 00:19:56.912 "enable_zerocopy_send_client": false, 00:19:56.912 "zerocopy_threshold": 0, 00:19:56.912 "tls_version": 0, 00:19:56.912 "enable_ktls": false 00:19:56.912 } 00:19:56.912 }, 00:19:56.912 { 00:19:56.912 "method": "sock_impl_set_options", 00:19:56.912 "params": { 00:19:56.912 "impl_name": "posix", 00:19:56.912 "recv_buf_size": 2097152, 00:19:56.912 "send_buf_size": 2097152, 00:19:56.912 "enable_recv_pipe": true, 00:19:56.912 "enable_quickack": false, 00:19:56.912 "enable_placement_id": 0, 00:19:56.912 "enable_zerocopy_send_server": true, 00:19:56.912 "enable_zerocopy_send_client": false, 00:19:56.912 "zerocopy_threshold": 0, 00:19:56.912 "tls_version": 0, 00:19:56.912 "enable_ktls": false 00:19:56.912 } 00:19:56.912 } 00:19:56.912 ] 00:19:56.912 }, 00:19:56.912 { 00:19:56.912 "subsystem": "vmd", 00:19:56.912 "config": [] 00:19:56.912 }, 00:19:56.912 { 00:19:56.912 "subsystem": "accel", 00:19:56.912 "config": [ 00:19:56.912 { 00:19:56.912 "method": "accel_set_options", 00:19:56.912 "params": { 00:19:56.912 "small_cache_size": 128, 00:19:56.912 "large_cache_size": 16, 00:19:56.912 "task_count": 2048, 00:19:56.912 "sequence_count": 2048, 00:19:56.912 "buf_count": 2048 00:19:56.912 } 00:19:56.912 } 00:19:56.912 ] 00:19:56.912 }, 00:19:56.912 { 00:19:56.912 "subsystem": "bdev", 00:19:56.912 "config": [ 00:19:56.912 { 00:19:56.912 "method": "bdev_set_options", 00:19:56.912 "params": { 00:19:56.912 "bdev_io_pool_size": 65535, 00:19:56.912 "bdev_io_cache_size": 256, 00:19:56.912 "bdev_auto_examine": true, 00:19:56.912 "iobuf_small_cache_size": 128, 00:19:56.912 "iobuf_large_cache_size": 16 00:19:56.912 } 00:19:56.912 }, 00:19:56.912 { 00:19:56.912 "method": "bdev_raid_set_options", 00:19:56.912 "params": { 00:19:56.912 "process_window_size_kb": 1024 00:19:56.912 } 00:19:56.912 }, 00:19:56.912 { 00:19:56.912 "method": "bdev_iscsi_set_options", 00:19:56.912 "params": { 00:19:56.912 "timeout_sec": 30 00:19:56.912 } 00:19:56.912 }, 00:19:56.912 { 00:19:56.912 "method": "bdev_nvme_set_options", 00:19:56.912 "params": { 00:19:56.912 "action_on_timeout": "none", 00:19:56.912 "timeout_us": 0, 00:19:56.912 "timeout_admin_us": 0, 00:19:56.912 "keep_alive_timeout_ms": 10000, 00:19:56.912 "arbitration_burst": 0, 00:19:56.912 "low_priority_weight": 0, 00:19:56.912 "medium_priority_weight": 0, 00:19:56.912 "high_priority_weight": 0, 00:19:56.912 
"nvme_adminq_poll_period_us": 10000, 00:19:56.912 "nvme_ioq_poll_period_us": 0, 00:19:56.912 "io_queue_requests": 512, 00:19:56.912 "delay_cmd_submit": true, 00:19:56.912 "transport_retry_count": 4, 00:19:56.912 "bdev_retry_count": 3, 00:19:56.912 "transport_ack_timeout": 0, 00:19:56.912 "ctrlr_loss_timeout_sec": 0, 00:19:56.912 "reconnect_delay_sec": 0, 00:19:56.912 "fast_io_fail_timeout_sec": 0, 00:19:56.912 "disable_auto_failback": false, 00:19:56.912 "generate_uuids": false, 00:19:56.912 "transport_tos": 0, 00:19:56.912 "nvme_error_stat": false, 00:19:56.912 "rdma_srq_size": 0, 00:19:56.912 "io_path_stat": false, 00:19:56.912 "allow_accel_sequence": false, 00:19:56.912 "rdma_max_cq_size": 0, 00:19:56.912 "rdma_cm_event_timeout_ms": 0, 00:19:56.912 "dhchap_digests": [ 00:19:56.912 "sha256", 00:19:56.912 "sha384", 00:19:56.912 "sha512" 00:19:56.912 ], 00:19:56.912 "dhchap_dhgroups": [ 00:19:56.912 "null", 00:19:56.912 "ffdhe2048", 00:19:56.912 "ffdhe3072", 00:19:56.912 "ffdhe4096", 00:19:56.912 "ffdhe6144", 00:19:56.912 "ffdhe8192" 00:19:56.912 ] 00:19:56.912 } 00:19:56.912 }, 00:19:56.912 { 00:19:56.912 "method": "bdev_nvme_attach_controller", 00:19:56.912 "params": { 00:19:56.912 "name": "TLSTEST", 00:19:56.912 "trtype": "TCP", 00:19:56.912 "adrfam": "IPv4", 00:19:56.912 "traddr": "10.0.0.2", 00:19:56.912 "trsvcid": "4420", 00:19:56.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.912 "prchk_reftag": false, 00:19:56.912 "prchk_guard": false, 00:19:56.912 "ctrlr_loss_timeout_sec": 0, 00:19:56.912 "reconnect_delay_sec": 0, 00:19:56.912 "fast_io_fail_timeout_sec": 0, 00:19:56.912 "psk": "/tmp/tmp.gE3qqbMaoL", 00:19:56.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.912 "hdgst": false, 00:19:56.912 "ddgst": false 00:19:56.912 } 00:19:56.912 }, 00:19:56.912 { 00:19:56.912 "method": "bdev_nvme_set_hotplug", 00:19:56.912 "params": { 00:19:56.912 "period_us": 100000, 00:19:56.912 "enable": false 00:19:56.912 } 00:19:56.912 }, 00:19:56.912 { 00:19:56.912 "method": "bdev_wait_for_examine" 00:19:56.912 } 00:19:56.912 ] 00:19:56.912 }, 00:19:56.912 { 00:19:56.912 "subsystem": "nbd", 00:19:56.912 "config": [] 00:19:56.912 } 00:19:56.912 ] 00:19:56.912 }' 00:19:56.912 19:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.912 19:11:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.912 [2024-07-12 19:11:59.393077] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:19:56.912 [2024-07-12 19:11:59.393125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid338252 ] 00:19:56.912 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.912 [2024-07-12 19:11:59.461806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.171 [2024-07-12 19:11:59.541441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.171 [2024-07-12 19:11:59.684492] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.171 [2024-07-12 19:11:59.684579] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:57.740 19:12:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.740 19:12:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:57.740 19:12:00 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:57.740 Running I/O for 10 seconds... 00:20:09.951 00:20:09.952 Latency(us) 00:20:09.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.952 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:09.952 Verification LBA range: start 0x0 length 0x2000 00:20:09.952 TLSTESTn1 : 10.01 5119.32 20.00 0.00 0.00 24965.13 5499.33 32824.99 00:20:09.952 =================================================================================================================== 00:20:09.952 Total : 5119.32 20.00 0.00 0.00 24965.13 5499.33 32824.99 00:20:09.952 0 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 338252 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 338252 ']' 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 338252 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 338252 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 338252' 00:20:09.952 killing process with pid 338252 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 338252 00:20:09.952 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.952 00:20:09.952 Latency(us) 00:20:09.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.952 =================================================================================================================== 00:20:09.952 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.952 [2024-07-12 19:12:10.402749] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 
scheduled for removal in v24.09 hit 1 times 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 338252 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 338216 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 338216 ']' 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 338216 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 338216 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 338216' 00:20:09.952 killing process with pid 338216 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 338216 00:20:09.952 [2024-07-12 19:12:10.625644] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 338216 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=340613 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 340613 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 340613 ']' 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.952 19:12:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.952 [2024-07-12 19:12:10.871477] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
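A quick consistency check on the TLSTESTn1 numbers reported above: the MiB/s column is just the IOPS column scaled by the 4096-byte I/O size, 5119.32 * 4096 / 2^20 = 20.00. Reproducible with a one-liner:

    # IOPS at 4 KiB per I/O -> MiB/s (matches the 20.00 in the table)
    awk 'BEGIN { printf "%.2f MiB/s\n", 5119.32 * 4096 / 1048576 }'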
00:20:09.952 [2024-07-12 19:12:10.871524] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.952 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.952 [2024-07-12 19:12:10.943783] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.952 [2024-07-12 19:12:11.020542] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.952 [2024-07-12 19:12:11.020580] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.952 [2024-07-12 19:12:11.020586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.952 [2024-07-12 19:12:11.020592] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.952 [2024-07-12 19:12:11.020596] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.952 [2024-07-12 19:12:11.020635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.952 19:12:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.952 19:12:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:09.952 19:12:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:09.952 19:12:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:09.952 19:12:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.952 19:12:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.952 19:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.gE3qqbMaoL 00:20:09.952 19:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gE3qqbMaoL 00:20:09.952 19:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:09.952 [2024-07-12 19:12:11.864851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.952 19:12:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:09.952 19:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:09.952 [2024-07-12 19:12:12.213728] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.952 [2024-07-12 19:12:12.213920] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.952 19:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:09.952 malloc0 00:20:09.952 19:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:10.212 19:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.gE3qqbMaoL 00:20:10.212 [2024-07-12 19:12:12.755414] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:10.470 19:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=341082 00:20:10.470 19:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:10.470 19:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:10.470 19:12:12 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 341082 /var/tmp/bdevperf.sock 00:20:10.470 19:12:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 341082 ']' 00:20:10.470 19:12:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.470 19:12:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.470 19:12:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:10.470 19:12:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.470 19:12:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.470 [2024-07-12 19:12:12.826221] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:20:10.470 [2024-07-12 19:12:12.826282] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341082 ] 00:20:10.470 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.470 [2024-07-12 19:12:12.894689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.470 [2024-07-12 19:12:12.968219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.407 19:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.407 19:12:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:11.407 19:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gE3qqbMaoL 00:20:11.407 19:12:13 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:11.407 [2024-07-12 19:12:13.972383] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.666 nvme0n1 00:20:11.667 19:12:14 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:11.667 Running I/O for 1 seconds... 
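Unlike the first pass, which handed bdev_nvme_attach_controller the PSK as a raw file path (the spdk_nvme_ctrlr_opts.psk route already flagged as deprecated above), this pass registers the key with the keyring first and attaches by key name. Condensed from the two rpc.py calls just traced, with the workspace path shortened:

    # Register the PSK file as a named key on the bdevperf instance...
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gE3qqbMaoL
    # ...then attach the TLS-secured controller referencing the key by name.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1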
00:20:12.603 00:20:12.603 Latency(us) 00:20:12.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.603 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:12.603 Verification LBA range: start 0x0 length 0x2000 00:20:12.603 nvme0n1 : 1.01 4913.56 19.19 0.00 0.00 25878.21 5613.30 79782.96 00:20:12.603 =================================================================================================================== 00:20:12.603 Total : 4913.56 19.19 0.00 0.00 25878.21 5613.30 79782.96 00:20:12.603 0 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 341082 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 341082 ']' 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 341082 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 341082 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 341082' 00:20:12.863 killing process with pid 341082 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 341082 00:20:12.863 Received shutdown signal, test time was about 1.000000 seconds 00:20:12.863 00:20:12.863 Latency(us) 00:20:12.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.863 =================================================================================================================== 00:20:12.863 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 341082 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 340613 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 340613 ']' 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 340613 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:12.863 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 340613 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 340613' 00:20:13.122 killing process with pid 340613 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 340613 00:20:13.122 [2024-07-12 19:12:15.457080] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 340613 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:13.122 19:12:15 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=341556 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 341556 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 341556 ']' 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.122 19:12:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.381 [2024-07-12 19:12:15.712204] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:20:13.381 [2024-07-12 19:12:15.712259] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.381 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.381 [2024-07-12 19:12:15.780357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.381 [2024-07-12 19:12:15.857539] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.381 [2024-07-12 19:12:15.857576] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.381 [2024-07-12 19:12:15.857583] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.381 [2024-07-12 19:12:15.857589] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.381 [2024-07-12 19:12:15.857593] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
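For reference, the setup_nvmf_tgt helper traced in the previous pass (target/tls.sh@51 through @58) reduces to a short rpc.py sequence: TCP transport, subsystem, TLS-enabled listener (-k marks the secure channel), malloc bdev, namespace, and the allowed host plus its PSK. A condensed sketch mirroring the traced commands, with the workspace path shortened:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gE3qqbMaoL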
00:20:13.381 [2024-07-12 19:12:15.857614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.950 19:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.950 19:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:13.950 19:12:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:13.950 19:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:13.950 19:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.210 19:12:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.210 19:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:14.210 19:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.210 19:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.210 [2024-07-12 19:12:16.553432] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.210 malloc0 00:20:14.210 [2024-07-12 19:12:16.581777] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.210 [2024-07-12 19:12:16.581970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.210 19:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.210 19:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=341653 00:20:14.210 19:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 341653 /var/tmp/bdevperf.sock 00:20:14.210 19:12:16 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:14.210 19:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 341653 ']' 00:20:14.210 19:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.210 19:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.210 19:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.210 19:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.210 19:12:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.210 [2024-07-12 19:12:16.655108] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
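At the end of the pass below, both live configurations are read back with save_config (target/tls.sh@263 and @264) and the target copy is later replayed verbatim into a fresh nvmf_tgt via -c /dev/fd/62 (tls.sh@269). The capture step, sketched with rpc_cmd written out as the rpc.py call it wraps in autotest:

    # Serialize the running configuration as JSON for replay on a later start.
    tgtcfg=$(rpc.py save_config)                               # target instance
    bperfcfg=$(rpc.py -s /var/tmp/bdevperf.sock save_config)   # bdevperf instance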
00:20:14.210 [2024-07-12 19:12:16.655150] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341653 ] 00:20:14.210 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.210 [2024-07-12 19:12:16.719514] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.470 [2024-07-12 19:12:16.793395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.038 19:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.038 19:12:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:15.038 19:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gE3qqbMaoL 00:20:15.297 19:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:15.298 [2024-07-12 19:12:17.809229] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.557 nvme0n1 00:20:15.557 19:12:17 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.557 Running I/O for 1 seconds... 00:20:16.496 00:20:16.496 Latency(us) 00:20:16.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.496 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:16.496 Verification LBA range: start 0x0 length 0x2000 00:20:16.496 nvme0n1 : 1.01 4880.61 19.06 0.00 0.00 26045.80 5613.30 39435.58 00:20:16.496 =================================================================================================================== 00:20:16.496 Total : 4880.61 19.06 0.00 0.00 26045.80 5613.30 39435.58 00:20:16.496 0 00:20:16.496 19:12:19 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:16.496 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.496 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.756 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.756 19:12:19 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:16.756 "subsystems": [ 00:20:16.756 { 00:20:16.756 "subsystem": "keyring", 00:20:16.756 "config": [ 00:20:16.756 { 00:20:16.756 "method": "keyring_file_add_key", 00:20:16.756 "params": { 00:20:16.756 "name": "key0", 00:20:16.756 "path": "/tmp/tmp.gE3qqbMaoL" 00:20:16.756 } 00:20:16.756 } 00:20:16.756 ] 00:20:16.756 }, 00:20:16.756 { 00:20:16.756 "subsystem": "iobuf", 00:20:16.756 "config": [ 00:20:16.756 { 00:20:16.756 "method": "iobuf_set_options", 00:20:16.756 "params": { 00:20:16.756 "small_pool_count": 8192, 00:20:16.756 "large_pool_count": 1024, 00:20:16.756 "small_bufsize": 8192, 00:20:16.756 "large_bufsize": 135168 00:20:16.756 } 00:20:16.756 } 00:20:16.756 ] 00:20:16.756 }, 00:20:16.756 { 00:20:16.756 "subsystem": "sock", 00:20:16.756 "config": [ 00:20:16.756 { 00:20:16.756 "method": "sock_set_default_impl", 00:20:16.756 "params": { 00:20:16.756 "impl_name": "posix" 00:20:16.756 } 
00:20:16.756 }, 00:20:16.756 { 00:20:16.756 "method": "sock_impl_set_options", 00:20:16.756 "params": { 00:20:16.756 "impl_name": "ssl", 00:20:16.756 "recv_buf_size": 4096, 00:20:16.756 "send_buf_size": 4096, 00:20:16.756 "enable_recv_pipe": true, 00:20:16.756 "enable_quickack": false, 00:20:16.756 "enable_placement_id": 0, 00:20:16.756 "enable_zerocopy_send_server": true, 00:20:16.756 "enable_zerocopy_send_client": false, 00:20:16.756 "zerocopy_threshold": 0, 00:20:16.756 "tls_version": 0, 00:20:16.756 "enable_ktls": false 00:20:16.756 } 00:20:16.756 }, 00:20:16.756 { 00:20:16.756 "method": "sock_impl_set_options", 00:20:16.756 "params": { 00:20:16.756 "impl_name": "posix", 00:20:16.756 "recv_buf_size": 2097152, 00:20:16.756 "send_buf_size": 2097152, 00:20:16.756 "enable_recv_pipe": true, 00:20:16.756 "enable_quickack": false, 00:20:16.756 "enable_placement_id": 0, 00:20:16.756 "enable_zerocopy_send_server": true, 00:20:16.756 "enable_zerocopy_send_client": false, 00:20:16.756 "zerocopy_threshold": 0, 00:20:16.756 "tls_version": 0, 00:20:16.756 "enable_ktls": false 00:20:16.756 } 00:20:16.756 } 00:20:16.756 ] 00:20:16.756 }, 00:20:16.756 { 00:20:16.756 "subsystem": "vmd", 00:20:16.756 "config": [] 00:20:16.756 }, 00:20:16.756 { 00:20:16.756 "subsystem": "accel", 00:20:16.756 "config": [ 00:20:16.756 { 00:20:16.756 "method": "accel_set_options", 00:20:16.756 "params": { 00:20:16.756 "small_cache_size": 128, 00:20:16.756 "large_cache_size": 16, 00:20:16.756 "task_count": 2048, 00:20:16.756 "sequence_count": 2048, 00:20:16.756 "buf_count": 2048 00:20:16.756 } 00:20:16.756 } 00:20:16.756 ] 00:20:16.756 }, 00:20:16.756 { 00:20:16.756 "subsystem": "bdev", 00:20:16.756 "config": [ 00:20:16.756 { 00:20:16.756 "method": "bdev_set_options", 00:20:16.756 "params": { 00:20:16.756 "bdev_io_pool_size": 65535, 00:20:16.756 "bdev_io_cache_size": 256, 00:20:16.756 "bdev_auto_examine": true, 00:20:16.756 "iobuf_small_cache_size": 128, 00:20:16.756 "iobuf_large_cache_size": 16 00:20:16.756 } 00:20:16.756 }, 00:20:16.756 { 00:20:16.756 "method": "bdev_raid_set_options", 00:20:16.756 "params": { 00:20:16.756 "process_window_size_kb": 1024 00:20:16.756 } 00:20:16.756 }, 00:20:16.756 { 00:20:16.756 "method": "bdev_iscsi_set_options", 00:20:16.756 "params": { 00:20:16.756 "timeout_sec": 30 00:20:16.756 } 00:20:16.756 }, 00:20:16.756 { 00:20:16.756 "method": "bdev_nvme_set_options", 00:20:16.756 "params": { 00:20:16.756 "action_on_timeout": "none", 00:20:16.756 "timeout_us": 0, 00:20:16.756 "timeout_admin_us": 0, 00:20:16.756 "keep_alive_timeout_ms": 10000, 00:20:16.756 "arbitration_burst": 0, 00:20:16.756 "low_priority_weight": 0, 00:20:16.756 "medium_priority_weight": 0, 00:20:16.756 "high_priority_weight": 0, 00:20:16.756 "nvme_adminq_poll_period_us": 10000, 00:20:16.756 "nvme_ioq_poll_period_us": 0, 00:20:16.756 "io_queue_requests": 0, 00:20:16.756 "delay_cmd_submit": true, 00:20:16.756 "transport_retry_count": 4, 00:20:16.756 "bdev_retry_count": 3, 00:20:16.756 "transport_ack_timeout": 0, 00:20:16.756 "ctrlr_loss_timeout_sec": 0, 00:20:16.756 "reconnect_delay_sec": 0, 00:20:16.756 "fast_io_fail_timeout_sec": 0, 00:20:16.756 "disable_auto_failback": false, 00:20:16.756 "generate_uuids": false, 00:20:16.756 "transport_tos": 0, 00:20:16.756 "nvme_error_stat": false, 00:20:16.756 "rdma_srq_size": 0, 00:20:16.756 "io_path_stat": false, 00:20:16.756 "allow_accel_sequence": false, 00:20:16.756 "rdma_max_cq_size": 0, 00:20:16.756 "rdma_cm_event_timeout_ms": 0, 00:20:16.756 "dhchap_digests": [ 00:20:16.756 "sha256", 
00:20:16.756 "sha384", 00:20:16.756 "sha512" 00:20:16.756 ], 00:20:16.756 "dhchap_dhgroups": [ 00:20:16.756 "null", 00:20:16.756 "ffdhe2048", 00:20:16.756 "ffdhe3072", 00:20:16.756 "ffdhe4096", 00:20:16.756 "ffdhe6144", 00:20:16.756 "ffdhe8192" 00:20:16.756 ] 00:20:16.756 } 00:20:16.757 }, 00:20:16.757 { 00:20:16.757 "method": "bdev_nvme_set_hotplug", 00:20:16.757 "params": { 00:20:16.757 "period_us": 100000, 00:20:16.757 "enable": false 00:20:16.757 } 00:20:16.757 }, 00:20:16.757 { 00:20:16.757 "method": "bdev_malloc_create", 00:20:16.757 "params": { 00:20:16.757 "name": "malloc0", 00:20:16.757 "num_blocks": 8192, 00:20:16.757 "block_size": 4096, 00:20:16.757 "physical_block_size": 4096, 00:20:16.757 "uuid": "657cafcc-f55c-44dc-babf-3911cfaeec53", 00:20:16.757 "optimal_io_boundary": 0 00:20:16.757 } 00:20:16.757 }, 00:20:16.757 { 00:20:16.757 "method": "bdev_wait_for_examine" 00:20:16.757 } 00:20:16.757 ] 00:20:16.757 }, 00:20:16.757 { 00:20:16.757 "subsystem": "nbd", 00:20:16.757 "config": [] 00:20:16.757 }, 00:20:16.757 { 00:20:16.757 "subsystem": "scheduler", 00:20:16.757 "config": [ 00:20:16.757 { 00:20:16.757 "method": "framework_set_scheduler", 00:20:16.757 "params": { 00:20:16.757 "name": "static" 00:20:16.757 } 00:20:16.757 } 00:20:16.757 ] 00:20:16.757 }, 00:20:16.757 { 00:20:16.757 "subsystem": "nvmf", 00:20:16.757 "config": [ 00:20:16.757 { 00:20:16.757 "method": "nvmf_set_config", 00:20:16.757 "params": { 00:20:16.757 "discovery_filter": "match_any", 00:20:16.757 "admin_cmd_passthru": { 00:20:16.757 "identify_ctrlr": false 00:20:16.757 } 00:20:16.757 } 00:20:16.757 }, 00:20:16.757 { 00:20:16.757 "method": "nvmf_set_max_subsystems", 00:20:16.757 "params": { 00:20:16.757 "max_subsystems": 1024 00:20:16.757 } 00:20:16.757 }, 00:20:16.757 { 00:20:16.757 "method": "nvmf_set_crdt", 00:20:16.757 "params": { 00:20:16.757 "crdt1": 0, 00:20:16.757 "crdt2": 0, 00:20:16.757 "crdt3": 0 00:20:16.757 } 00:20:16.757 }, 00:20:16.757 { 00:20:16.757 "method": "nvmf_create_transport", 00:20:16.757 "params": { 00:20:16.757 "trtype": "TCP", 00:20:16.757 "max_queue_depth": 128, 00:20:16.757 "max_io_qpairs_per_ctrlr": 127, 00:20:16.757 "in_capsule_data_size": 4096, 00:20:16.757 "max_io_size": 131072, 00:20:16.757 "io_unit_size": 131072, 00:20:16.757 "max_aq_depth": 128, 00:20:16.757 "num_shared_buffers": 511, 00:20:16.757 "buf_cache_size": 4294967295, 00:20:16.757 "dif_insert_or_strip": false, 00:20:16.757 "zcopy": false, 00:20:16.757 "c2h_success": false, 00:20:16.757 "sock_priority": 0, 00:20:16.757 "abort_timeout_sec": 1, 00:20:16.757 "ack_timeout": 0, 00:20:16.757 "data_wr_pool_size": 0 00:20:16.757 } 00:20:16.757 }, 00:20:16.757 { 00:20:16.757 "method": "nvmf_create_subsystem", 00:20:16.757 "params": { 00:20:16.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.757 "allow_any_host": false, 00:20:16.757 "serial_number": "00000000000000000000", 00:20:16.757 "model_number": "SPDK bdev Controller", 00:20:16.757 "max_namespaces": 32, 00:20:16.757 "min_cntlid": 1, 00:20:16.757 "max_cntlid": 65519, 00:20:16.757 "ana_reporting": false 00:20:16.757 } 00:20:16.757 }, 00:20:16.757 { 00:20:16.757 "method": "nvmf_subsystem_add_host", 00:20:16.757 "params": { 00:20:16.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.757 "host": "nqn.2016-06.io.spdk:host1", 00:20:16.757 "psk": "key0" 00:20:16.757 } 00:20:16.757 }, 00:20:16.757 { 00:20:16.757 "method": "nvmf_subsystem_add_ns", 00:20:16.757 "params": { 00:20:16.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.757 "namespace": { 00:20:16.757 "nsid": 1, 
00:20:16.757 "bdev_name": "malloc0", 00:20:16.757 "nguid": "657CAFCCF55C44DCBABF3911CFAEEC53", 00:20:16.757 "uuid": "657cafcc-f55c-44dc-babf-3911cfaeec53", 00:20:16.757 "no_auto_visible": false 00:20:16.757 } 00:20:16.757 } 00:20:16.757 }, 00:20:16.757 { 00:20:16.757 "method": "nvmf_subsystem_add_listener", 00:20:16.757 "params": { 00:20:16.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.757 "listen_address": { 00:20:16.757 "trtype": "TCP", 00:20:16.757 "adrfam": "IPv4", 00:20:16.757 "traddr": "10.0.0.2", 00:20:16.757 "trsvcid": "4420" 00:20:16.757 }, 00:20:16.757 "secure_channel": true 00:20:16.757 } 00:20:16.757 } 00:20:16.757 ] 00:20:16.757 } 00:20:16.757 ] 00:20:16.757 }' 00:20:16.757 19:12:19 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:17.017 19:12:19 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:17.017 "subsystems": [ 00:20:17.017 { 00:20:17.017 "subsystem": "keyring", 00:20:17.017 "config": [ 00:20:17.017 { 00:20:17.017 "method": "keyring_file_add_key", 00:20:17.017 "params": { 00:20:17.017 "name": "key0", 00:20:17.017 "path": "/tmp/tmp.gE3qqbMaoL" 00:20:17.017 } 00:20:17.017 } 00:20:17.017 ] 00:20:17.017 }, 00:20:17.017 { 00:20:17.017 "subsystem": "iobuf", 00:20:17.017 "config": [ 00:20:17.017 { 00:20:17.017 "method": "iobuf_set_options", 00:20:17.017 "params": { 00:20:17.017 "small_pool_count": 8192, 00:20:17.017 "large_pool_count": 1024, 00:20:17.017 "small_bufsize": 8192, 00:20:17.017 "large_bufsize": 135168 00:20:17.017 } 00:20:17.017 } 00:20:17.017 ] 00:20:17.017 }, 00:20:17.017 { 00:20:17.017 "subsystem": "sock", 00:20:17.017 "config": [ 00:20:17.017 { 00:20:17.017 "method": "sock_set_default_impl", 00:20:17.017 "params": { 00:20:17.017 "impl_name": "posix" 00:20:17.017 } 00:20:17.017 }, 00:20:17.017 { 00:20:17.017 "method": "sock_impl_set_options", 00:20:17.017 "params": { 00:20:17.017 "impl_name": "ssl", 00:20:17.017 "recv_buf_size": 4096, 00:20:17.017 "send_buf_size": 4096, 00:20:17.017 "enable_recv_pipe": true, 00:20:17.017 "enable_quickack": false, 00:20:17.017 "enable_placement_id": 0, 00:20:17.017 "enable_zerocopy_send_server": true, 00:20:17.017 "enable_zerocopy_send_client": false, 00:20:17.017 "zerocopy_threshold": 0, 00:20:17.017 "tls_version": 0, 00:20:17.017 "enable_ktls": false 00:20:17.017 } 00:20:17.017 }, 00:20:17.017 { 00:20:17.017 "method": "sock_impl_set_options", 00:20:17.017 "params": { 00:20:17.017 "impl_name": "posix", 00:20:17.017 "recv_buf_size": 2097152, 00:20:17.017 "send_buf_size": 2097152, 00:20:17.017 "enable_recv_pipe": true, 00:20:17.017 "enable_quickack": false, 00:20:17.017 "enable_placement_id": 0, 00:20:17.017 "enable_zerocopy_send_server": true, 00:20:17.017 "enable_zerocopy_send_client": false, 00:20:17.017 "zerocopy_threshold": 0, 00:20:17.017 "tls_version": 0, 00:20:17.017 "enable_ktls": false 00:20:17.017 } 00:20:17.017 } 00:20:17.017 ] 00:20:17.017 }, 00:20:17.017 { 00:20:17.017 "subsystem": "vmd", 00:20:17.017 "config": [] 00:20:17.017 }, 00:20:17.017 { 00:20:17.017 "subsystem": "accel", 00:20:17.017 "config": [ 00:20:17.017 { 00:20:17.017 "method": "accel_set_options", 00:20:17.017 "params": { 00:20:17.017 "small_cache_size": 128, 00:20:17.017 "large_cache_size": 16, 00:20:17.017 "task_count": 2048, 00:20:17.017 "sequence_count": 2048, 00:20:17.017 "buf_count": 2048 00:20:17.017 } 00:20:17.017 } 00:20:17.017 ] 00:20:17.017 }, 00:20:17.017 { 00:20:17.017 "subsystem": "bdev", 00:20:17.017 "config": [ 
00:20:17.017 { 00:20:17.017 "method": "bdev_set_options", 00:20:17.017 "params": { 00:20:17.017 "bdev_io_pool_size": 65535, 00:20:17.017 "bdev_io_cache_size": 256, 00:20:17.017 "bdev_auto_examine": true, 00:20:17.017 "iobuf_small_cache_size": 128, 00:20:17.017 "iobuf_large_cache_size": 16 00:20:17.017 } 00:20:17.017 }, 00:20:17.017 { 00:20:17.017 "method": "bdev_raid_set_options", 00:20:17.017 "params": { 00:20:17.017 "process_window_size_kb": 1024 00:20:17.017 } 00:20:17.017 }, 00:20:17.017 { 00:20:17.017 "method": "bdev_iscsi_set_options", 00:20:17.017 "params": { 00:20:17.017 "timeout_sec": 30 00:20:17.017 } 00:20:17.017 }, 00:20:17.017 { 00:20:17.017 "method": "bdev_nvme_set_options", 00:20:17.017 "params": { 00:20:17.017 "action_on_timeout": "none", 00:20:17.017 "timeout_us": 0, 00:20:17.017 "timeout_admin_us": 0, 00:20:17.017 "keep_alive_timeout_ms": 10000, 00:20:17.018 "arbitration_burst": 0, 00:20:17.018 "low_priority_weight": 0, 00:20:17.018 "medium_priority_weight": 0, 00:20:17.018 "high_priority_weight": 0, 00:20:17.018 "nvme_adminq_poll_period_us": 10000, 00:20:17.018 "nvme_ioq_poll_period_us": 0, 00:20:17.018 "io_queue_requests": 512, 00:20:17.018 "delay_cmd_submit": true, 00:20:17.018 "transport_retry_count": 4, 00:20:17.018 "bdev_retry_count": 3, 00:20:17.018 "transport_ack_timeout": 0, 00:20:17.018 "ctrlr_loss_timeout_sec": 0, 00:20:17.018 "reconnect_delay_sec": 0, 00:20:17.018 "fast_io_fail_timeout_sec": 0, 00:20:17.018 "disable_auto_failback": false, 00:20:17.018 "generate_uuids": false, 00:20:17.018 "transport_tos": 0, 00:20:17.018 "nvme_error_stat": false, 00:20:17.018 "rdma_srq_size": 0, 00:20:17.018 "io_path_stat": false, 00:20:17.018 "allow_accel_sequence": false, 00:20:17.018 "rdma_max_cq_size": 0, 00:20:17.018 "rdma_cm_event_timeout_ms": 0, 00:20:17.018 "dhchap_digests": [ 00:20:17.018 "sha256", 00:20:17.018 "sha384", 00:20:17.018 "sha512" 00:20:17.018 ], 00:20:17.018 "dhchap_dhgroups": [ 00:20:17.018 "null", 00:20:17.018 "ffdhe2048", 00:20:17.018 "ffdhe3072", 00:20:17.018 "ffdhe4096", 00:20:17.018 "ffdhe6144", 00:20:17.018 "ffdhe8192" 00:20:17.018 ] 00:20:17.018 } 00:20:17.018 }, 00:20:17.018 { 00:20:17.018 "method": "bdev_nvme_attach_controller", 00:20:17.018 "params": { 00:20:17.018 "name": "nvme0", 00:20:17.018 "trtype": "TCP", 00:20:17.018 "adrfam": "IPv4", 00:20:17.018 "traddr": "10.0.0.2", 00:20:17.018 "trsvcid": "4420", 00:20:17.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.018 "prchk_reftag": false, 00:20:17.018 "prchk_guard": false, 00:20:17.018 "ctrlr_loss_timeout_sec": 0, 00:20:17.018 "reconnect_delay_sec": 0, 00:20:17.018 "fast_io_fail_timeout_sec": 0, 00:20:17.018 "psk": "key0", 00:20:17.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.018 "hdgst": false, 00:20:17.018 "ddgst": false 00:20:17.018 } 00:20:17.018 }, 00:20:17.018 { 00:20:17.018 "method": "bdev_nvme_set_hotplug", 00:20:17.018 "params": { 00:20:17.018 "period_us": 100000, 00:20:17.018 "enable": false 00:20:17.018 } 00:20:17.018 }, 00:20:17.018 { 00:20:17.018 "method": "bdev_enable_histogram", 00:20:17.018 "params": { 00:20:17.018 "name": "nvme0n1", 00:20:17.018 "enable": true 00:20:17.018 } 00:20:17.018 }, 00:20:17.018 { 00:20:17.018 "method": "bdev_wait_for_examine" 00:20:17.018 } 00:20:17.018 ] 00:20:17.018 }, 00:20:17.018 { 00:20:17.018 "subsystem": "nbd", 00:20:17.018 "config": [] 00:20:17.018 } 00:20:17.018 ] 00:20:17.018 }' 00:20:17.018 19:12:19 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 341653 00:20:17.018 19:12:19 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 341653 ']' 00:20:17.018 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 341653 00:20:17.018 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:17.018 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:17.018 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 341653 00:20:17.018 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:17.018 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:17.018 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 341653' 00:20:17.018 killing process with pid 341653 00:20:17.018 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 341653 00:20:17.018 Received shutdown signal, test time was about 1.000000 seconds 00:20:17.018 00:20:17.018 Latency(us) 00:20:17.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.018 =================================================================================================================== 00:20:17.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.018 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 341653 00:20:17.278 19:12:19 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 341556 00:20:17.278 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 341556 ']' 00:20:17.278 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 341556 00:20:17.278 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:17.278 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:17.278 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 341556 00:20:17.278 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:17.278 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:17.279 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 341556' 00:20:17.279 killing process with pid 341556 00:20:17.279 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 341556 00:20:17.279 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 341556 00:20:17.279 19:12:19 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:17.279 19:12:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.279 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:17.279 19:12:19 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:17.279 "subsystems": [ 00:20:17.279 { 00:20:17.279 "subsystem": "keyring", 00:20:17.279 "config": [ 00:20:17.279 { 00:20:17.279 "method": "keyring_file_add_key", 00:20:17.279 "params": { 00:20:17.279 "name": "key0", 00:20:17.279 "path": "/tmp/tmp.gE3qqbMaoL" 00:20:17.279 } 00:20:17.279 } 00:20:17.279 ] 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "subsystem": "iobuf", 00:20:17.279 "config": [ 00:20:17.279 { 00:20:17.279 "method": "iobuf_set_options", 00:20:17.279 "params": { 00:20:17.279 "small_pool_count": 8192, 00:20:17.279 "large_pool_count": 1024, 00:20:17.279 "small_bufsize": 8192, 00:20:17.279 "large_bufsize": 135168 00:20:17.279 } 00:20:17.279 } 00:20:17.279 ] 00:20:17.279 }, 00:20:17.279 { 
00:20:17.279 "subsystem": "sock", 00:20:17.279 "config": [ 00:20:17.279 { 00:20:17.279 "method": "sock_set_default_impl", 00:20:17.279 "params": { 00:20:17.279 "impl_name": "posix" 00:20:17.279 } 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "method": "sock_impl_set_options", 00:20:17.279 "params": { 00:20:17.279 "impl_name": "ssl", 00:20:17.279 "recv_buf_size": 4096, 00:20:17.279 "send_buf_size": 4096, 00:20:17.279 "enable_recv_pipe": true, 00:20:17.279 "enable_quickack": false, 00:20:17.279 "enable_placement_id": 0, 00:20:17.279 "enable_zerocopy_send_server": true, 00:20:17.279 "enable_zerocopy_send_client": false, 00:20:17.279 "zerocopy_threshold": 0, 00:20:17.279 "tls_version": 0, 00:20:17.279 "enable_ktls": false 00:20:17.279 } 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "method": "sock_impl_set_options", 00:20:17.279 "params": { 00:20:17.279 "impl_name": "posix", 00:20:17.279 "recv_buf_size": 2097152, 00:20:17.279 "send_buf_size": 2097152, 00:20:17.279 "enable_recv_pipe": true, 00:20:17.279 "enable_quickack": false, 00:20:17.279 "enable_placement_id": 0, 00:20:17.279 "enable_zerocopy_send_server": true, 00:20:17.279 "enable_zerocopy_send_client": false, 00:20:17.279 "zerocopy_threshold": 0, 00:20:17.279 "tls_version": 0, 00:20:17.279 "enable_ktls": false 00:20:17.279 } 00:20:17.279 } 00:20:17.279 ] 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "subsystem": "vmd", 00:20:17.279 "config": [] 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "subsystem": "accel", 00:20:17.279 "config": [ 00:20:17.279 { 00:20:17.279 "method": "accel_set_options", 00:20:17.279 "params": { 00:20:17.279 "small_cache_size": 128, 00:20:17.279 "large_cache_size": 16, 00:20:17.279 "task_count": 2048, 00:20:17.279 "sequence_count": 2048, 00:20:17.279 "buf_count": 2048 00:20:17.279 } 00:20:17.279 } 00:20:17.279 ] 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "subsystem": "bdev", 00:20:17.279 "config": [ 00:20:17.279 { 00:20:17.279 "method": "bdev_set_options", 00:20:17.279 "params": { 00:20:17.279 "bdev_io_pool_size": 65535, 00:20:17.279 "bdev_io_cache_size": 256, 00:20:17.279 "bdev_auto_examine": true, 00:20:17.279 "iobuf_small_cache_size": 128, 00:20:17.279 "iobuf_large_cache_size": 16 00:20:17.279 } 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "method": "bdev_raid_set_options", 00:20:17.279 "params": { 00:20:17.279 "process_window_size_kb": 1024 00:20:17.279 } 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "method": "bdev_iscsi_set_options", 00:20:17.279 "params": { 00:20:17.279 "timeout_sec": 30 00:20:17.279 } 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "method": "bdev_nvme_set_options", 00:20:17.279 "params": { 00:20:17.279 "action_on_timeout": "none", 00:20:17.279 "timeout_us": 0, 00:20:17.279 "timeout_admin_us": 0, 00:20:17.279 "keep_alive_timeout_ms": 10000, 00:20:17.279 "arbitration_burst": 0, 00:20:17.279 "low_priority_weight": 0, 00:20:17.279 "medium_priority_weight": 0, 00:20:17.279 "high_priority_weight": 0, 00:20:17.279 "nvme_adminq_poll_period_us": 10000, 00:20:17.279 "nvme_ioq_poll_period_us": 0, 00:20:17.279 "io_queue_requests": 0, 00:20:17.279 "delay_cmd_submit": true, 00:20:17.279 "transport_retry_count": 4, 00:20:17.279 "bdev_retry_count": 3, 00:20:17.279 "transport_ack_timeout": 0, 00:20:17.279 "ctrlr_loss_timeout_sec": 0, 00:20:17.279 "reconnect_delay_sec": 0, 00:20:17.279 "fast_io_fail_timeout_sec": 0, 00:20:17.279 "disable_auto_failback": false, 00:20:17.279 "generate_uuids": false, 00:20:17.279 "transport_tos": 0, 00:20:17.279 "nvme_error_stat": false, 00:20:17.279 "rdma_srq_size": 0, 00:20:17.279 
"io_path_stat": false, 00:20:17.279 "allow_accel_sequence": false, 00:20:17.279 "rdma_max_cq_size": 0, 00:20:17.279 "rdma_cm_event_timeout_ms": 0, 00:20:17.279 "dhchap_digests": [ 00:20:17.279 "sha256", 00:20:17.279 "sha384", 00:20:17.279 "sha512" 00:20:17.279 ], 00:20:17.279 "dhchap_dhgroups": [ 00:20:17.279 "null", 00:20:17.279 "ffdhe2048", 00:20:17.279 "ffdhe3072", 00:20:17.279 "ffdhe4096", 00:20:17.279 "ffdhe6144", 00:20:17.279 "ffdhe8192" 00:20:17.279 ] 00:20:17.279 } 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "method": "bdev_nvme_set_hotplug", 00:20:17.279 "params": { 00:20:17.279 "period_us": 100000, 00:20:17.279 "enable": false 00:20:17.279 } 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "method": "bdev_malloc_create", 00:20:17.279 "params": { 00:20:17.279 "name": "malloc0", 00:20:17.279 "num_blocks": 8192, 00:20:17.279 "block_size": 4096, 00:20:17.279 "physical_block_size": 4096, 00:20:17.279 "uuid": "657cafcc-f55c-44dc-babf-3911cfaeec53", 00:20:17.279 "optimal_io_boundary": 0 00:20:17.279 } 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "method": "bdev_wait_for_examine" 00:20:17.279 } 00:20:17.279 ] 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "subsystem": "nbd", 00:20:17.279 "config": [] 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "subsystem": "scheduler", 00:20:17.279 "config": [ 00:20:17.279 { 00:20:17.279 "method": "framework_set_scheduler", 00:20:17.279 "params": { 00:20:17.279 "name": "static" 00:20:17.279 } 00:20:17.279 } 00:20:17.279 ] 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "subsystem": "nvmf", 00:20:17.279 "config": [ 00:20:17.279 { 00:20:17.279 "method": "nvmf_set_config", 00:20:17.279 "params": { 00:20:17.279 "discovery_filter": "match_any", 00:20:17.279 "admin_cmd_passthru": { 00:20:17.279 "identify_ctrlr": false 00:20:17.279 } 00:20:17.279 } 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "method": "nvmf_set_max_subsystems", 00:20:17.279 "params": { 00:20:17.279 "max_subsystems": 1024 00:20:17.279 } 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "method": "nvmf_set_crdt", 00:20:17.279 "params": { 00:20:17.279 "crdt1": 0, 00:20:17.279 "crdt2": 0, 00:20:17.279 "crdt3": 0 00:20:17.279 } 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "method": "nvmf_create_transport", 00:20:17.279 "params": { 00:20:17.279 "trtype": "TCP", 00:20:17.279 "max_queue_depth": 128, 00:20:17.279 "max_io_qpairs_per_ctrlr": 127, 00:20:17.279 "in_capsule_data_size": 4096, 00:20:17.279 "max_io_size": 131072, 00:20:17.279 "io_unit_size": 131072, 00:20:17.279 "max_aq_depth": 128, 00:20:17.279 "num_shared_buffers": 511, 00:20:17.279 "buf_cache_size": 4294967295, 00:20:17.279 "dif_insert_or_strip": false, 00:20:17.279 "zcopy": false, 00:20:17.279 "c2h_success": false, 00:20:17.279 "sock_priority": 0, 00:20:17.279 "abort_timeout_sec": 1, 00:20:17.279 "ack_timeout": 0, 00:20:17.279 "data_wr_pool_size": 0 00:20:17.279 } 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "method": "nvmf_create_subsystem", 00:20:17.279 "params": { 00:20:17.279 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.279 "allow_any_host": false, 00:20:17.279 "serial_number": "00000000000000000000", 00:20:17.279 "model_number": "SPDK bdev Controller", 00:20:17.279 "max_namespaces": 32, 00:20:17.279 "min_cntlid": 1, 00:20:17.279 "max_cntlid": 65519, 00:20:17.279 "ana_reporting": false 00:20:17.279 } 00:20:17.279 }, 00:20:17.279 { 00:20:17.279 "method": "nvmf_subsystem_add_host", 00:20:17.279 "params": { 00:20:17.279 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.279 "host": "nqn.2016-06.io.spdk:host1", 00:20:17.280 "psk": "key0" 00:20:17.280 } 00:20:17.280 
}, 00:20:17.280 { 00:20:17.280 "method": "nvmf_subsystem_add_ns", 00:20:17.280 "params": { 00:20:17.280 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.280 "namespace": { 00:20:17.280 "nsid": 1, 00:20:17.280 "bdev_name": "malloc0", 00:20:17.280 "nguid": "657CAFCCF55C44DCBABF3911CFAEEC53", 00:20:17.280 "uuid": "657cafcc-f55c-44dc-babf-3911cfaeec53", 00:20:17.280 "no_auto_visible": false 00:20:17.280 } 00:20:17.280 } 00:20:17.280 }, 00:20:17.280 { 00:20:17.280 "method": "nvmf_subsystem_add_listener", 00:20:17.280 "params": { 00:20:17.280 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.280 "listen_address": { 00:20:17.280 "trtype": "TCP", 00:20:17.280 "adrfam": "IPv4", 00:20:17.280 "traddr": "10.0.0.2", 00:20:17.280 "trsvcid": "4420" 00:20:17.280 }, 00:20:17.280 "secure_channel": true 00:20:17.280 } 00:20:17.280 } 00:20:17.280 ] 00:20:17.280 } 00:20:17.280 ] 00:20:17.280 }' 00:20:17.280 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.280 19:12:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=342287 00:20:17.280 19:12:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 342287 00:20:17.280 19:12:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:17.280 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 342287 ']' 00:20:17.280 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.280 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:17.280 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.280 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.280 19:12:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.540 [2024-07-12 19:12:19.888504] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:20:17.540 [2024-07-12 19:12:19.888548] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.540 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.540 [2024-07-12 19:12:19.955753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.540 [2024-07-12 19:12:20.041612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.540 [2024-07-12 19:12:20.041646] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.540 [2024-07-12 19:12:20.041653] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.540 [2024-07-12 19:12:20.041660] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.540 [2024-07-12 19:12:20.041665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
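The JSON blob echoed above never touches disk: the test hands it to nvmf_tgt as -c /dev/fd/62, the /dev/fd path produced by bash process substitution. A minimal sketch of the same pattern with a cut-down illustrative config (the key name and path below are placeholders, not the ones from this run):

  # <(echo ...) expands to a /dev/fd/NN path, which is why the target sees -c /dev/fd/62
  config='{"subsystems":[{"subsystem":"keyring","config":[{"method":"keyring_file_add_key","params":{"name":"key0","path":"/tmp/psk.txt"}}]}]}'
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$config")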
00:20:17.540 [2024-07-12 19:12:20.041708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.799 [2024-07-12 19:12:20.251330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.799 [2024-07-12 19:12:20.283349] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.799 [2024-07-12 19:12:20.291446] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.369 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.369 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:18.369 19:12:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.369 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:18.369 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.369 19:12:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.369 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=342326 00:20:18.369 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 342326 /var/tmp/bdevperf.sock 00:20:18.369 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 342326 ']' 00:20:18.369 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.369 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:18.369 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.370 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
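For reference, the TLS-relevant methods in the target config above map onto live RPC calls roughly as follows. This is a sketch only: flag spellings can differ between SPDK releases, and the /tmp key path is the per-run temporary file shown in the config dump.

  # register the PSK file under the name the subsystem config refers to
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gE3qqbMaoL
  # bind host1 to cnode1 with that PSK, and require TLS on the listener
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel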
00:20:18.370 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:18.370 "subsystems": [ 00:20:18.370 { 00:20:18.370 "subsystem": "keyring", 00:20:18.370 "config": [ 00:20:18.370 { 00:20:18.370 "method": "keyring_file_add_key", 00:20:18.370 "params": { 00:20:18.370 "name": "key0", 00:20:18.370 "path": "/tmp/tmp.gE3qqbMaoL" 00:20:18.370 } 00:20:18.370 } 00:20:18.370 ] 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "subsystem": "iobuf", 00:20:18.370 "config": [ 00:20:18.370 { 00:20:18.370 "method": "iobuf_set_options", 00:20:18.370 "params": { 00:20:18.370 "small_pool_count": 8192, 00:20:18.370 "large_pool_count": 1024, 00:20:18.370 "small_bufsize": 8192, 00:20:18.370 "large_bufsize": 135168 00:20:18.370 } 00:20:18.370 } 00:20:18.370 ] 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "subsystem": "sock", 00:20:18.370 "config": [ 00:20:18.370 { 00:20:18.370 "method": "sock_set_default_impl", 00:20:18.370 "params": { 00:20:18.370 "impl_name": "posix" 00:20:18.370 } 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "method": "sock_impl_set_options", 00:20:18.370 "params": { 00:20:18.370 "impl_name": "ssl", 00:20:18.370 "recv_buf_size": 4096, 00:20:18.370 "send_buf_size": 4096, 00:20:18.370 "enable_recv_pipe": true, 00:20:18.370 "enable_quickack": false, 00:20:18.370 "enable_placement_id": 0, 00:20:18.370 "enable_zerocopy_send_server": true, 00:20:18.370 "enable_zerocopy_send_client": false, 00:20:18.370 "zerocopy_threshold": 0, 00:20:18.370 "tls_version": 0, 00:20:18.370 "enable_ktls": false 00:20:18.370 } 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "method": "sock_impl_set_options", 00:20:18.370 "params": { 00:20:18.370 "impl_name": "posix", 00:20:18.370 "recv_buf_size": 2097152, 00:20:18.370 "send_buf_size": 2097152, 00:20:18.370 "enable_recv_pipe": true, 00:20:18.370 "enable_quickack": false, 00:20:18.370 "enable_placement_id": 0, 00:20:18.370 "enable_zerocopy_send_server": true, 00:20:18.370 "enable_zerocopy_send_client": false, 00:20:18.370 "zerocopy_threshold": 0, 00:20:18.370 "tls_version": 0, 00:20:18.370 "enable_ktls": false 00:20:18.370 } 00:20:18.370 } 00:20:18.370 ] 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "subsystem": "vmd", 00:20:18.370 "config": [] 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "subsystem": "accel", 00:20:18.370 "config": [ 00:20:18.370 { 00:20:18.370 "method": "accel_set_options", 00:20:18.370 "params": { 00:20:18.370 "small_cache_size": 128, 00:20:18.370 "large_cache_size": 16, 00:20:18.370 "task_count": 2048, 00:20:18.370 "sequence_count": 2048, 00:20:18.370 "buf_count": 2048 00:20:18.370 } 00:20:18.370 } 00:20:18.370 ] 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "subsystem": "bdev", 00:20:18.370 "config": [ 00:20:18.370 { 00:20:18.370 "method": "bdev_set_options", 00:20:18.370 "params": { 00:20:18.370 "bdev_io_pool_size": 65535, 00:20:18.370 "bdev_io_cache_size": 256, 00:20:18.370 "bdev_auto_examine": true, 00:20:18.370 "iobuf_small_cache_size": 128, 00:20:18.370 "iobuf_large_cache_size": 16 00:20:18.370 } 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "method": "bdev_raid_set_options", 00:20:18.370 "params": { 00:20:18.370 "process_window_size_kb": 1024 00:20:18.370 } 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "method": "bdev_iscsi_set_options", 00:20:18.370 "params": { 00:20:18.370 "timeout_sec": 30 00:20:18.370 } 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "method": "bdev_nvme_set_options", 00:20:18.370 "params": { 00:20:18.370 "action_on_timeout": "none", 00:20:18.370 "timeout_us": 0, 00:20:18.370 "timeout_admin_us": 0, 00:20:18.370 "keep_alive_timeout_ms": 
10000, 00:20:18.370 "arbitration_burst": 0, 00:20:18.370 "low_priority_weight": 0, 00:20:18.370 "medium_priority_weight": 0, 00:20:18.370 "high_priority_weight": 0, 00:20:18.370 "nvme_adminq_poll_period_us": 10000, 00:20:18.370 "nvme_ioq_poll_period_us": 0, 00:20:18.370 "io_queue_requests": 512, 00:20:18.370 "delay_cmd_submit": true, 00:20:18.370 "transport_retry_count": 4, 00:20:18.370 "bdev_retry_count": 3, 00:20:18.370 "transport_ack_timeout": 0, 00:20:18.370 "ctrlr_loss_timeout_sec": 0, 00:20:18.370 "reconnect_delay_sec": 0, 00:20:18.370 "fast_io_fail_timeout_sec": 0, 00:20:18.370 "disable_auto_failback": false, 00:20:18.370 "generate_uuids": false, 00:20:18.370 "transport_tos": 0, 00:20:18.370 "nvme_error_stat": false, 00:20:18.370 "rdma_srq_size": 0, 00:20:18.370 "io_path_stat": false, 00:20:18.370 "allow_accel_sequence": false, 00:20:18.370 "rdma_max_cq_size": 0, 00:20:18.370 "rdma_cm_event_timeout_ms": 0, 00:20:18.370 "dhchap_digests": [ 00:20:18.370 "sha256", 00:20:18.370 "sha384", 00:20:18.370 "sha512" 00:20:18.370 ], 00:20:18.370 "dhchap_dhgroups": [ 00:20:18.370 "null", 00:20:18.370 "ffdhe2048", 00:20:18.370 "ffdhe3072", 00:20:18.370 "ffdhe4096", 00:20:18.370 "ffdhe6144", 00:20:18.370 "ffdhe8192" 00:20:18.370 ] 00:20:18.370 } 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "method": "bdev_nvme_attach_controller", 00:20:18.370 "params": { 00:20:18.370 "name": "nvme0", 00:20:18.370 "trtype": "TCP", 00:20:18.370 "adrfam": "IPv4", 00:20:18.370 "traddr": "10.0.0.2", 00:20:18.370 "trsvcid": "4420", 00:20:18.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.370 "prchk_reftag": false, 00:20:18.370 "prchk_guard": false, 00:20:18.370 "ctrlr_loss_timeout_sec": 0, 00:20:18.370 "reconnect_delay_sec": 0, 00:20:18.370 "fast_io_fail_timeout_sec": 0, 00:20:18.370 "psk": "key0", 00:20:18.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.370 "hdgst": false, 00:20:18.370 "ddgst": false 00:20:18.370 } 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "method": "bdev_nvme_set_hotplug", 00:20:18.370 "params": { 00:20:18.370 "period_us": 100000, 00:20:18.370 "enable": false 00:20:18.370 } 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "method": "bdev_enable_histogram", 00:20:18.370 "params": { 00:20:18.370 "name": "nvme0n1", 00:20:18.370 "enable": true 00:20:18.370 } 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "method": "bdev_wait_for_examine" 00:20:18.370 } 00:20:18.370 ] 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "subsystem": "nbd", 00:20:18.370 "config": [] 00:20:18.370 } 00:20:18.370 ] 00:20:18.370 }' 00:20:18.370 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.370 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.370 [2024-07-12 19:12:20.774441] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:20:18.370 [2024-07-12 19:12:20.774489] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid342326 ] 00:20:18.370 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.370 [2024-07-12 19:12:20.842813] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.370 [2024-07-12 19:12:20.923407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.630 [2024-07-12 19:12:21.075127] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.200 19:12:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.200 19:12:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:19.200 19:12:21 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:19.200 19:12:21 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:19.460 19:12:21 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.460 19:12:21 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:19.460 Running I/O for 1 seconds... 00:20:20.402 00:20:20.403 Latency(us) 00:20:20.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.403 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:20.403 Verification LBA range: start 0x0 length 0x2000 00:20:20.403 nvme0n1 : 1.01 5397.52 21.08 0.00 0.00 23549.87 5128.90 33508.84 00:20:20.403 =================================================================================================================== 00:20:20.403 Total : 5397.52 21.08 0.00 0.00 23549.87 5128.90 33508.84 00:20:20.403 0 00:20:20.403 19:12:22 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:20.403 19:12:22 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:20.403 19:12:22 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:20.403 19:12:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:20:20.403 19:12:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:20:20.403 19:12:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:20.403 19:12:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:20.403 19:12:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:20.403 19:12:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:20.403 19:12:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:20.403 19:12:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:20.403 nvmf_trace.0 00:20:20.666 19:12:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:20:20.666 19:12:22 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 342326 00:20:20.666 19:12:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 342326 ']' 00:20:20.666 19:12:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
kill -0 342326 00:20:20.666 19:12:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:20.666 19:12:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:20.666 19:12:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 342326 00:20:20.666 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:20.666 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:20.666 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 342326' 00:20:20.666 killing process with pid 342326 00:20:20.666 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 342326 00:20:20.666 Received shutdown signal, test time was about 1.000000 seconds 00:20:20.666 00:20:20.666 Latency(us) 00:20:20.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.666 =================================================================================================================== 00:20:20.666 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:20.666 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 342326 00:20:20.666 19:12:23 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:20.666 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:20.666 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:20.666 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:20.666 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:20.666 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:20.666 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:20.666 rmmod nvme_tcp 00:20:20.666 rmmod nvme_fabrics 00:20:20.926 rmmod nvme_keyring 00:20:20.926 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:20.926 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:20.926 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:20.926 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 342287 ']' 00:20:20.926 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 342287 00:20:20.926 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 342287 ']' 00:20:20.926 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 342287 00:20:20.926 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:20.926 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:20.926 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 342287 00:20:20.926 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:20.926 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:20.926 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 342287' 00:20:20.926 killing process with pid 342287 00:20:20.926 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 342287 00:20:20.926 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 342287 00:20:21.185 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:21.185 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:21.185 19:12:23 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:21.185 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.185 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:21.185 19:12:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.185 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.185 19:12:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.094 19:12:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:23.094 19:12:25 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.itO0seXiIb /tmp/tmp.0624C3pMjF /tmp/tmp.gE3qqbMaoL 00:20:23.094 00:20:23.094 real 1m25.356s 00:20:23.094 user 2m11.769s 00:20:23.094 sys 0m29.168s 00:20:23.094 19:12:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:23.094 19:12:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.094 ************************************ 00:20:23.094 END TEST nvmf_tls 00:20:23.094 ************************************ 00:20:23.094 19:12:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:23.094 19:12:25 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:23.094 19:12:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:23.094 19:12:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:23.094 19:12:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:23.094 ************************************ 00:20:23.094 START TEST nvmf_fips 00:20:23.094 ************************************ 00:20:23.094 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:23.354 * Looking for test storage... 
00:20:23.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.354 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.355 19:12:25 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:23.355 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:23.616 Error setting digest 00:20:23.616 00B27939157F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:23.616 00B27939157F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.616 19:12:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.616 19:12:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:23.616 19:12:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:23.616 19:12:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:23.616 19:12:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:28.898 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.898 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.898 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.899 
19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:28.899 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:28.899 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:28.899 Found net devices under 0000:86:00.0: cvl_0_0 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:28.899 Found net devices under 0000:86:00.1: cvl_0_1 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.899 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:29.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:20:29.159 00:20:29.159 --- 10.0.0.2 ping statistics --- 00:20:29.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.159 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:29.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:29.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:20:29.159 00:20:29.159 --- 10.0.0.1 ping statistics --- 00:20:29.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.159 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=346333 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 346333 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 346333 ']' 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.159 19:12:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:29.419 [2024-07-12 19:12:31.776740] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:20:29.419 [2024-07-12 19:12:31.776787] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.419 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.419 [2024-07-12 19:12:31.843745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.419 [2024-07-12 19:12:31.919934] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.419 [2024-07-12 19:12:31.919966] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:29.419 [2024-07-12 19:12:31.919973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.419 [2024-07-12 19:12:31.919978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.419 [2024-07-12 19:12:31.919983] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:29.419 [2024-07-12 19:12:31.920019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:30.359 [2024-07-12 19:12:32.757983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.359 [2024-07-12 19:12:32.773985] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:30.359 [2024-07-12 19:12:32.774119] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.359 [2024-07-12 19:12:32.802101] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:30.359 malloc0 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=346578 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 346578 /var/tmp/bdevperf.sock 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 346578 ']' 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.359 19:12:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:30.359 [2024-07-12 19:12:32.894798] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:20:30.359 [2024-07-12 19:12:32.894846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346578 ] 00:20:30.359 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.618 [2024-07-12 19:12:32.961113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.618 [2024-07-12 19:12:33.034111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.186 19:12:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.186 19:12:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:31.186 19:12:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:31.445 [2024-07-12 19:12:33.852492] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.445 [2024-07-12 19:12:33.852583] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:31.445 TLSTESTn1 00:20:31.445 19:12:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:31.704 Running I/O for 10 seconds... 
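For reference, the fips.sh key handling and TLS attach traced above reduce to the following sketch. $SPDK_DIR stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout; the key, address, and NQNs are the ones used in this run, and this is a condensation of the trace, not a verbatim excerpt of the script:

  # Write the NVMe/TCP TLS pre-shared key to a file readable only by the owner.
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path="$SPDK_DIR/test/nvmf/fips/key.txt"
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"
  # Attach a TLS-protected controller through bdevperf's RPC socket.
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk "$key_path"

Note that --psk here takes the path to the 0600-mode key file; the deprecation warnings in this log ("nvmf_tcp_psk_path" and "spdk_nvme_ctrlr_opts.psk") flag this path-based interface for removal in v24.09.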
00:20:41.751
00:20:41.751 Latency(us)
00:20:41.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:41.751 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:41.751 Verification LBA range: start 0x0 length 0x2000
00:20:41.751 TLSTESTn1 : 10.03 5161.84 20.16 0.00 0.00 24750.54 5043.42 31913.18
00:20:41.751 ===================================================================================================================
00:20:41.751 Total : 5161.84 20.16 0.00 0.00 24750.54 5043.42 31913.18
00:20:41.751 0
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:41.751 nvmf_trace.0
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 346578
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 346578 ']'
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 346578
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 346578
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 346578'
00:20:41.751 killing process with pid 346578
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 346578
00:20:41.751 Received shutdown signal, test time was about 10.000000 seconds
00:20:41.751
00:20:41.751 Latency(us)
00:20:41.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:41.751 ===================================================================================================================
00:20:41.751 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:41.751 [2024-07-12 19:12:44.228233] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:20:41.751 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 346578
00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips --
nvmf/common.sh@488 -- # nvmfcleanup 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:42.011 rmmod nvme_tcp 00:20:42.011 rmmod nvme_fabrics 00:20:42.011 rmmod nvme_keyring 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 346333 ']' 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 346333 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 346333 ']' 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 346333 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 346333 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 346333' 00:20:42.011 killing process with pid 346333 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 346333 00:20:42.011 [2024-07-12 19:12:44.511457] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:42.011 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 346333 00:20:42.271 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:42.271 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:42.271 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:42.271 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:42.271 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:42.271 19:12:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.271 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.271 19:12:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.811 19:12:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:44.811 19:12:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:44.811 00:20:44.811 real 0m21.122s 00:20:44.811 user 0m22.479s 00:20:44.811 sys 0m9.518s 00:20:44.811 19:12:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:44.811 19:12:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:44.811 ************************************ 00:20:44.811 END TEST nvmf_fips 00:20:44.811 
************************************ 00:20:44.811 19:12:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:44.811 19:12:46 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:44.811 19:12:46 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:44.811 19:12:46 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:44.811 19:12:46 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:44.811 19:12:46 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:44.811 19:12:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:50.090 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:50.090 19:12:52 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:50.090 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:50.090 Found net devices under 0000:86:00.0: cvl_0_0 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:50.090 Found net devices under 0000:86:00.1: cvl_0_1 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:20:50.090 19:12:52 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:50.090 19:12:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:50.090 19:12:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
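The gather_supported_nvmf_pci_devs trace above boils down to: classify NICs by PCI vendor:device ID (0x8086:0x159b is an E810 part driven by ice), then resolve each matched address to its kernel interface through sysfs. A condensed sketch, assuming the two addresses found in this run:

  # Hypothetical condensation of the discovery traced above: the test caches PCI
  # addresses per vendor:device ID, then lists the net devices sysfs exposes for each.
  intel=0x8086
  e810=("0000:86:00.0" "0000:86:00.1")           # addresses cached for $intel:0x159b in this run
  for pci in "${e810[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the sysfs path, keep the ifname
      echo "Found net devices under $pci: ${pci_net_devs[*]}"   # e.g. cvl_0_0
  done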
00:20:50.090 19:12:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:50.090 ************************************ 00:20:50.090 START TEST nvmf_perf_adq 00:20:50.090 ************************************ 00:20:50.090 19:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:50.090 * Looking for test storage... 00:20:50.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:50.090 19:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.090 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:50.090 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.090 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.090 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.090 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.090 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.090 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.090 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.090 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.090 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.090 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.090 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:50.091 19:12:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:55.371 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:55.371 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:55.372 Found 0000:86:00.1 (0x8086 - 0x159b) 
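The nvmf/common.sh prologue sourced at the start of this test (the @17-@19 lines above) also derives the NVMe host identity that later nvme connect calls reuse. Roughly, with the UUID being the one generated in this run and the ##*: extraction an illustrative assumption rather than the script's exact derivation:

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: keep the uuid field after the last colon
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # Later: nvme connect "${NVME_HOST[@]}" -t tcp -a <target> -s 4420 -n <subnqn>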
00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:55.372 Found net devices under 0000:86:00.0: cvl_0_0 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:55.372 Found net devices under 0000:86:00.1: cvl_0_1 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:55.372 19:12:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:56.753 19:12:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:00.047 19:13:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:05.328 19:13:07 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.328 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:05.328 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:05.329 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:05.329 Found net devices under 0000:86:00.0: cvl_0_0 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:05.329 Found net devices under 0000:86:00.1: cvl_0_1 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.329 19:13:07 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:21:05.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:05.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms
00:21:05.329
00:21:05.329 --- 10.0.0.2 ping statistics ---
00:21:05.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:05.329 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:05.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:05.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms
00:21:05.329
00:21:05.329 --- 10.0.0.1 ping statistics ---
00:21:05.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:05.329 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=356497
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 356497
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 356497 ']'
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:05.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:05.329 19:13:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:05.329 [2024-07-12 19:13:07.451630] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
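The namespace plumbing that the two pings just verified amounts to the following, with interface names and addresses as in this run. The dual-port E810 is split so the target and initiator talk over real hardware on one host; the target is then launched inside the namespace via ip netns exec, as the nvmf_tgt command line above shows:

  # Move one port of the NIC into a private namespace for the target side.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator keeps the second port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # admit NVMe/TCP traffic
  # Sanity-check reachability in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1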
00:21:05.329 [2024-07-12 19:13:07.451670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.329 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.329 [2024-07-12 19:13:07.523319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:05.329 [2024-07-12 19:13:07.596831] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.329 [2024-07-12 19:13:07.596874] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.329 [2024-07-12 19:13:07.596880] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.329 [2024-07-12 19:13:07.596886] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.329 [2024-07-12 19:13:07.596891] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.329 [2024-07-12 19:13:07.597007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.329 [2024-07-12 19:13:07.597051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.329 [2024-07-12 19:13:07.597134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.329 [2024-07-12 19:13:07.597135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:05.898 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.899 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.899 [2024-07-12 19:13:08.452678] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.899 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.899 19:13:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:05.899 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.899 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.157 Malloc1 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.157 [2024-07-12 19:13:08.500442] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=356750 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:21:06.157 19:13:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:06.157 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.061 19:13:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:08.062 19:13:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.062 19:13:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.062 19:13:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.062 19:13:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:08.062 
"tick_rate": 2300000000, 00:21:08.062 "poll_groups": [ 00:21:08.062 { 00:21:08.062 "name": "nvmf_tgt_poll_group_000", 00:21:08.062 "admin_qpairs": 1, 00:21:08.062 "io_qpairs": 1, 00:21:08.062 "current_admin_qpairs": 1, 00:21:08.062 "current_io_qpairs": 1, 00:21:08.062 "pending_bdev_io": 0, 00:21:08.062 "completed_nvme_io": 19841, 00:21:08.062 "transports": [ 00:21:08.062 { 00:21:08.062 "trtype": "TCP" 00:21:08.062 } 00:21:08.062 ] 00:21:08.062 }, 00:21:08.062 { 00:21:08.062 "name": "nvmf_tgt_poll_group_001", 00:21:08.062 "admin_qpairs": 0, 00:21:08.062 "io_qpairs": 1, 00:21:08.062 "current_admin_qpairs": 0, 00:21:08.062 "current_io_qpairs": 1, 00:21:08.062 "pending_bdev_io": 0, 00:21:08.062 "completed_nvme_io": 20043, 00:21:08.062 "transports": [ 00:21:08.062 { 00:21:08.062 "trtype": "TCP" 00:21:08.062 } 00:21:08.062 ] 00:21:08.062 }, 00:21:08.062 { 00:21:08.062 "name": "nvmf_tgt_poll_group_002", 00:21:08.062 "admin_qpairs": 0, 00:21:08.062 "io_qpairs": 1, 00:21:08.062 "current_admin_qpairs": 0, 00:21:08.062 "current_io_qpairs": 1, 00:21:08.062 "pending_bdev_io": 0, 00:21:08.062 "completed_nvme_io": 20322, 00:21:08.062 "transports": [ 00:21:08.062 { 00:21:08.062 "trtype": "TCP" 00:21:08.062 } 00:21:08.062 ] 00:21:08.062 }, 00:21:08.062 { 00:21:08.062 "name": "nvmf_tgt_poll_group_003", 00:21:08.062 "admin_qpairs": 0, 00:21:08.062 "io_qpairs": 1, 00:21:08.062 "current_admin_qpairs": 0, 00:21:08.062 "current_io_qpairs": 1, 00:21:08.062 "pending_bdev_io": 0, 00:21:08.062 "completed_nvme_io": 19753, 00:21:08.062 "transports": [ 00:21:08.062 { 00:21:08.062 "trtype": "TCP" 00:21:08.062 } 00:21:08.062 ] 00:21:08.062 } 00:21:08.062 ] 00:21:08.062 }' 00:21:08.062 19:13:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:08.062 19:13:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:08.062 19:13:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:08.062 19:13:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:08.062 19:13:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 356750 00:21:16.182 Initializing NVMe Controllers 00:21:16.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:16.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:16.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:16.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:16.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:16.182 Initialization complete. Launching workers. 
00:21:16.182 ========================================================
00:21:16.182 Latency(us)
00:21:16.182 Device Information : IOPS MiB/s Average min max
00:21:16.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10389.32 40.58 6159.60 1733.15 10358.35
00:21:16.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10598.21 41.40 6038.58 2038.55 10220.20
00:21:16.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10701.21 41.80 5980.71 1864.72 10493.14
00:21:16.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10545.22 41.19 6069.80 1842.81 10600.77
00:21:16.182 ========================================================
00:21:16.182 Total : 42233.97 164.98 6061.48 1733.15 10600.77
00:21:16.182
00:21:16.182 19:13:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
00:21:16.182 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:16.182 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:21:16.182 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:16.182 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:21:16.182 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:16.182 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:16.182 rmmod nvme_tcp
00:21:16.442 rmmod nvme_fabrics
00:21:16.442 rmmod nvme_keyring
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 356497 ']'
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 356497
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 356497 ']'
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 356497
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 356497
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 356497'
00:21:16.442 killing process with pid 356497
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 356497
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 356497
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq --
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.442 19:13:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.982 19:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:18.982 19:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:18.982 19:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:19.920 19:13:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:21.823 19:13:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.106 19:13:29 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:27.106 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:27.106 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
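The nvmf/common.sh@289-@352 trace above is gather_supported_nvmf_pci_devs bucketing NICs by PCI vendor:device ID (0x1592 and 0x159b into e810, 0x37d2 into x722, the Mellanox 0x15b3 devices into mlx) before the per-device loop at @340 reports the two Intel E810 ports this rig carries. A rough standalone equivalent, using lspci purely for illustration (the harness reads a prebuilt pci_bus_cache instead):

  # List E810 (0x8086:0x159b) functions and the netdev behind each one,
  # via the same sysfs path nvmf/common.sh walks at @383.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "Found $pci (0x8086 - 0x159b)"
      ls "/sys/bus/pci/devices/$pci/net/"   # e.g. cvl_0_0 on this rig
  done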
00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:27.106 Found net devices under 0000:86:00.0: cvl_0_0 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:27.106 Found net devices under 0000:86:00.1: cvl_0_1 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.106 
19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:27.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:21:27.106 00:21:27.106 --- 10.0.0.2 ping statistics --- 00:21:27.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.106 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:21:27.106 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:27.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:21:27.107 00:21:27.107 --- 10.0.0.1 ping statistics --- 00:21:27.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.107 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:27.107 net.core.busy_poll = 1 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:27.107 net.core.busy_read = 1 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:27.107 19:13:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:27.366 19:13:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:27.366 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:27.366 19:13:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:27.366 19:13:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.366 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=360527 00:21:27.366 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 360527 00:21:27.367 19:13:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:27.367 19:13:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 360527 ']' 00:21:27.367 19:13:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.367 19:13:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.367 19:13:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.367 19:13:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.367 19:13:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.367 [2024-07-12 19:13:29.742250] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:21:27.367 [2024-07-12 19:13:29.742292] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.367 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.367 [2024-07-12 19:13:29.812884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:27.367 [2024-07-12 19:13:29.890603] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.367 [2024-07-12 19:13:29.890641] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.367 [2024-07-12 19:13:29.890648] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.367 [2024-07-12 19:13:29.890654] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.367 [2024-07-12 19:13:29.890659] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
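Everything needed to reproduce the ADQ side of this setup is in the adq_configure_driver trace above (perf_adq.sh@22-@38). Collected into one sketch, with the ip netns exec cvl_0_0_ns_spdk prefix the test uses dropped for readability; the device name, addresses, and queue split are all taken from this run:

  # ADQ prerequisites: hardware TC offload on, packet-inspect optimization off.
  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  # Busy polling lets the target's poll groups spin on their sockets instead of sleeping.
  sysctl -w net.core.busy_poll=1 net.core.busy_read=1
  # Two traffic classes: TC0 = queues 0-1 (default/admin), TC1 = queues 2-3 (the ADQ channel).
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  # Steer NVMe/TCP flows (dst 10.0.0.2:4420) into TC1 in hardware only (skip_sw).
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper from the SPDK tree then aligns XPS so transmit queues match the receive side, and the target is launched with -m 0xF inside the same namespace; the doubled ip netns exec cvl_0_0_ns_spdk at nvmf/common.sh@480 appears to be the namespace wrapper applied twice (once via NVMF_APP, once by the app-start path), which is harmless since the second exec enters the namespace the process is already in.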
00:21:27.367 [2024-07-12 19:13:29.890771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.367 [2024-07-12 19:13:29.890814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.367 [2024-07-12 19:13:29.890900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.367 [2024-07-12 19:13:29.890901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.305 [2024-07-12 19:13:30.734197] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.305 Malloc1 00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.305 19:13:30 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:28.305 [2024-07-12 19:13:30.786298] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=360700
00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2
00:21:28.305 19:13:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:21:28.305 EAL: No free 2048 kB hugepages reported on node 1
00:21:30.843 19:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats
00:21:30.843 19:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:30.843 19:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:30.843 19:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:30.843 19:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{
00:21:30.843 "tick_rate": 2300000000,
00:21:30.843 "poll_groups": [
00:21:30.843 {
00:21:30.843 "name": "nvmf_tgt_poll_group_000",
00:21:30.843 "admin_qpairs": 1,
00:21:30.843 "io_qpairs": 0,
00:21:30.843 "current_admin_qpairs": 1,
00:21:30.843 "current_io_qpairs": 0,
00:21:30.843 "pending_bdev_io": 0,
00:21:30.843 "completed_nvme_io": 0,
00:21:30.843 "transports": [
00:21:30.843 {
00:21:30.843 "trtype": "TCP"
00:21:30.843 }
00:21:30.843 ]
00:21:30.843 },
00:21:30.843 {
00:21:30.843 "name": "nvmf_tgt_poll_group_001",
00:21:30.843 "admin_qpairs": 0,
00:21:30.843 "io_qpairs": 4,
00:21:30.843 "current_admin_qpairs": 0,
00:21:30.843 "current_io_qpairs": 4,
00:21:30.843 "pending_bdev_io": 0,
00:21:30.843 "completed_nvme_io": 43882,
00:21:30.843 "transports": [
00:21:30.843 {
00:21:30.843 "trtype": "TCP"
00:21:30.843 }
00:21:30.843 ]
00:21:30.843 },
00:21:30.843 {
00:21:30.843 "name": "nvmf_tgt_poll_group_002",
00:21:30.843 "admin_qpairs": 0,
00:21:30.843 "io_qpairs": 0,
00:21:30.843 "current_admin_qpairs": 0,
00:21:30.843 "current_io_qpairs": 0,
00:21:30.843 "pending_bdev_io": 0,
00:21:30.843 "completed_nvme_io": 0,
00:21:30.843 "transports": [
00:21:30.843 {
00:21:30.843 "trtype": "TCP"
00:21:30.843 }
00:21:30.843 ]
00:21:30.843 },
00:21:30.843 {
00:21:30.843 "name": "nvmf_tgt_poll_group_003",
00:21:30.843 "admin_qpairs": 0,
00:21:30.843 "io_qpairs": 0,
00:21:30.843 "current_admin_qpairs": 0,
00:21:30.843 "current_io_qpairs": 0,
00:21:30.843 "pending_bdev_io": 0,
00:21:30.843 "completed_nvme_io": 0,
00:21:30.843 "transports": [
00:21:30.843 {
00:21:30.843 "trtype": "TCP"
00:21:30.843 }
00:21:30.843 ]
00:21:30.843 }
00:21:30.843 ]
00:21:30.843 }'
00:21:30.843 19:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:21:30.843 19:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l
00:21:30.843 19:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3
00:21:30.843 19:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]]
00:21:30.843 19:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 360700
00:21:38.983 Initializing NVMe Controllers
00:21:38.984 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:38.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:21:38.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:21:38.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:21:38.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:21:38.984 Initialization complete. Launching workers.
00:21:38.984 ========================================================
00:21:38.984 Latency(us)
00:21:38.984 Device Information : IOPS MiB/s Average min max
00:21:38.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5636.50 22.02 11381.50 1520.25 57526.22
00:21:38.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5978.70 23.35 10739.34 1403.16 56742.17
00:21:38.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5768.00 22.53 11095.15 1519.62 58829.01
00:21:38.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5809.10 22.69 11053.31 1474.32 55324.19
00:21:38.984 ========================================================
00:21:38.984 Total : 23192.30 90.59 11062.54 1403.16 58829.01
00:21:38.984
00:21:38.984 19:13:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:21:38.984 19:13:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:38.984 19:13:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:21:38.984 19:13:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:38.984 19:13:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:21:38.984 19:13:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:38.984 19:13:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:38.984 rmmod nvme_tcp
00:21:38.984 rmmod nvme_fabrics
00:21:38.984 rmmod nvme_keyring
00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 360527 ']'
00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess
360527 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 360527 ']' 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 360527 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 360527 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 360527' 00:21:38.984 killing process with pid 360527 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 360527 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 360527 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.984 19:13:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.270 19:13:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:42.270 19:13:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:42.270 00:21:42.270 real 0m52.095s 00:21:42.270 user 2m49.148s 00:21:42.270 sys 0m10.976s 00:21:42.270 19:13:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:42.270 19:13:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.270 ************************************ 00:21:42.270 END TEST nvmf_perf_adq 00:21:42.270 ************************************ 00:21:42.270 19:13:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:42.270 19:13:44 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:42.270 19:13:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:42.270 19:13:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:42.270 19:13:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:42.270 ************************************ 00:21:42.270 START TEST nvmf_shutdown 00:21:42.270 ************************************ 00:21:42.270 19:13:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:42.270 * Looking for test storage... 
00:21:42.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:42.270 19:13:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:42.270 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:42.270 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.270 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.270 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:42.271 ************************************ 00:21:42.271 START TEST nvmf_shutdown_tc1 00:21:42.271 ************************************ 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:42.271 19:13:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:42.271 19:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:47.551 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:47.551 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.551 19:13:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.551 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:47.552 Found net devices under 0000:86:00.0: cvl_0_0 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:47.552 Found net devices under 0000:86:00.1: cvl_0_1 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.552 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:47.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:21:47.812 00:21:47.812 --- 10.0.0.2 ping statistics --- 00:21:47.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.812 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:47.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:21:47.812 00:21:47.812 --- 10.0.0.1 ping statistics --- 00:21:47.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.812 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=366002 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 366002 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 366002 ']' 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:47.812 19:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:48.072 [2024-07-12 19:13:50.428378] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:21:48.072 [2024-07-12 19:13:50.428428] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.072 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.072 [2024-07-12 19:13:50.505160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:48.072 [2024-07-12 19:13:50.586220] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.072 [2024-07-12 19:13:50.586262] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.072 [2024-07-12 19:13:50.586270] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.072 [2024-07-12 19:13:50.586277] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.072 [2024-07-12 19:13:50.586283] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.072 [2024-07-12 19:13:50.586336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.072 [2024-07-12 19:13:50.586360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.072 [2024-07-12 19:13:50.586466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.072 [2024-07-12 19:13:50.586467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.011 [2024-07-12 19:13:51.274511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:49.011 19:13:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.011 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.012 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:49.012 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:49.012 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.012 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.012 Malloc1 00:21:49.012 [2024-07-12 19:13:51.370248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.012 Malloc2 00:21:49.012 Malloc3 00:21:49.012 Malloc4 00:21:49.012 Malloc5 00:21:49.012 Malloc6 00:21:49.271 Malloc7 00:21:49.271 Malloc8 00:21:49.271 Malloc9 00:21:49.271 Malloc10 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=366287 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 366287 
/var/tmp/bdevperf.sock 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 366287 ']' 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.271 { 00:21:49.271 "params": { 00:21:49.271 "name": "Nvme$subsystem", 00:21:49.271 "trtype": "$TEST_TRANSPORT", 00:21:49.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.271 "adrfam": "ipv4", 00:21:49.271 "trsvcid": "$NVMF_PORT", 00:21:49.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.271 "hdgst": ${hdgst:-false}, 00:21:49.271 "ddgst": ${ddgst:-false} 00:21:49.271 }, 00:21:49.271 "method": "bdev_nvme_attach_controller" 00:21:49.271 } 00:21:49.271 EOF 00:21:49.271 )") 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.271 { 00:21:49.271 "params": { 00:21:49.271 "name": "Nvme$subsystem", 00:21:49.271 "trtype": "$TEST_TRANSPORT", 00:21:49.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.271 "adrfam": "ipv4", 00:21:49.271 "trsvcid": "$NVMF_PORT", 00:21:49.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.271 "hdgst": ${hdgst:-false}, 00:21:49.271 "ddgst": ${ddgst:-false} 00:21:49.271 }, 00:21:49.271 "method": "bdev_nvme_attach_controller" 00:21:49.271 } 00:21:49.271 EOF 00:21:49.271 )") 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.271 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.271 { 00:21:49.271 "params": { 00:21:49.271 
"name": "Nvme$subsystem", 00:21:49.271 "trtype": "$TEST_TRANSPORT", 00:21:49.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.271 "adrfam": "ipv4", 00:21:49.271 "trsvcid": "$NVMF_PORT", 00:21:49.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.271 "hdgst": ${hdgst:-false}, 00:21:49.271 "ddgst": ${ddgst:-false} 00:21:49.271 }, 00:21:49.271 "method": "bdev_nvme_attach_controller" 00:21:49.271 } 00:21:49.271 EOF 00:21:49.272 )") 00:21:49.272 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.272 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.272 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.272 { 00:21:49.272 "params": { 00:21:49.272 "name": "Nvme$subsystem", 00:21:49.272 "trtype": "$TEST_TRANSPORT", 00:21:49.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.272 "adrfam": "ipv4", 00:21:49.272 "trsvcid": "$NVMF_PORT", 00:21:49.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.272 "hdgst": ${hdgst:-false}, 00:21:49.272 "ddgst": ${ddgst:-false} 00:21:49.272 }, 00:21:49.272 "method": "bdev_nvme_attach_controller" 00:21:49.272 } 00:21:49.272 EOF 00:21:49.272 )") 00:21:49.272 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.272 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.272 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.272 { 00:21:49.272 "params": { 00:21:49.272 "name": "Nvme$subsystem", 00:21:49.272 "trtype": "$TEST_TRANSPORT", 00:21:49.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.272 "adrfam": "ipv4", 00:21:49.272 "trsvcid": "$NVMF_PORT", 00:21:49.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.272 "hdgst": ${hdgst:-false}, 00:21:49.272 "ddgst": ${ddgst:-false} 00:21:49.272 }, 00:21:49.272 "method": "bdev_nvme_attach_controller" 00:21:49.272 } 00:21:49.272 EOF 00:21:49.272 )") 00:21:49.272 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.272 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.272 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.272 { 00:21:49.272 "params": { 00:21:49.272 "name": "Nvme$subsystem", 00:21:49.272 "trtype": "$TEST_TRANSPORT", 00:21:49.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.272 "adrfam": "ipv4", 00:21:49.272 "trsvcid": "$NVMF_PORT", 00:21:49.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.272 "hdgst": ${hdgst:-false}, 00:21:49.272 "ddgst": ${ddgst:-false} 00:21:49.272 }, 00:21:49.272 "method": "bdev_nvme_attach_controller" 00:21:49.272 } 00:21:49.272 EOF 00:21:49.272 )") 00:21:49.272 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.272 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.272 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.272 { 00:21:49.272 "params": { 00:21:49.272 "name": "Nvme$subsystem", 
00:21:49.272 "trtype": "$TEST_TRANSPORT", 00:21:49.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.272 "adrfam": "ipv4", 00:21:49.272 "trsvcid": "$NVMF_PORT", 00:21:49.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.272 "hdgst": ${hdgst:-false}, 00:21:49.272 "ddgst": ${ddgst:-false} 00:21:49.272 }, 00:21:49.272 "method": "bdev_nvme_attach_controller" 00:21:49.272 } 00:21:49.272 EOF 00:21:49.272 )") 00:21:49.532 [2024-07-12 19:13:51.839506] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:21:49.532 [2024-07-12 19:13:51.839555] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:49.532 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.532 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.532 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.532 { 00:21:49.532 "params": { 00:21:49.532 "name": "Nvme$subsystem", 00:21:49.532 "trtype": "$TEST_TRANSPORT", 00:21:49.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.532 "adrfam": "ipv4", 00:21:49.532 "trsvcid": "$NVMF_PORT", 00:21:49.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.532 "hdgst": ${hdgst:-false}, 00:21:49.532 "ddgst": ${ddgst:-false} 00:21:49.532 }, 00:21:49.532 "method": "bdev_nvme_attach_controller" 00:21:49.532 } 00:21:49.532 EOF 00:21:49.532 )") 00:21:49.532 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.532 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.532 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.532 { 00:21:49.532 "params": { 00:21:49.532 "name": "Nvme$subsystem", 00:21:49.532 "trtype": "$TEST_TRANSPORT", 00:21:49.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.532 "adrfam": "ipv4", 00:21:49.532 "trsvcid": "$NVMF_PORT", 00:21:49.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.532 "hdgst": ${hdgst:-false}, 00:21:49.532 "ddgst": ${ddgst:-false} 00:21:49.532 }, 00:21:49.532 "method": "bdev_nvme_attach_controller" 00:21:49.532 } 00:21:49.532 EOF 00:21:49.532 )") 00:21:49.532 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.532 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:49.532 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:49.532 { 00:21:49.532 "params": { 00:21:49.532 "name": "Nvme$subsystem", 00:21:49.532 "trtype": "$TEST_TRANSPORT", 00:21:49.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.532 "adrfam": "ipv4", 00:21:49.532 "trsvcid": "$NVMF_PORT", 00:21:49.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.532 "hdgst": ${hdgst:-false}, 00:21:49.532 "ddgst": ${ddgst:-false} 00:21:49.532 }, 00:21:49.532 "method": "bdev_nvme_attach_controller" 00:21:49.532 } 00:21:49.532 EOF 00:21:49.532 )") 00:21:49.532 19:13:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:49.533 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:49.533 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.533 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:49.533 19:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:49.533 "params": { 00:21:49.533 "name": "Nvme1", 00:21:49.533 "trtype": "tcp", 00:21:49.533 "traddr": "10.0.0.2", 00:21:49.533 "adrfam": "ipv4", 00:21:49.533 "trsvcid": "4420", 00:21:49.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.533 "hdgst": false, 00:21:49.533 "ddgst": false 00:21:49.533 }, 00:21:49.533 "method": "bdev_nvme_attach_controller" 00:21:49.533 },{ 00:21:49.533 "params": { 00:21:49.533 "name": "Nvme2", 00:21:49.533 "trtype": "tcp", 00:21:49.533 "traddr": "10.0.0.2", 00:21:49.533 "adrfam": "ipv4", 00:21:49.533 "trsvcid": "4420", 00:21:49.533 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:49.533 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:49.533 "hdgst": false, 00:21:49.533 "ddgst": false 00:21:49.533 }, 00:21:49.533 "method": "bdev_nvme_attach_controller" 00:21:49.533 },{ 00:21:49.533 "params": { 00:21:49.533 "name": "Nvme3", 00:21:49.533 "trtype": "tcp", 00:21:49.533 "traddr": "10.0.0.2", 00:21:49.533 "adrfam": "ipv4", 00:21:49.533 "trsvcid": "4420", 00:21:49.533 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:49.533 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:49.533 "hdgst": false, 00:21:49.533 "ddgst": false 00:21:49.533 }, 00:21:49.533 "method": "bdev_nvme_attach_controller" 00:21:49.533 },{ 00:21:49.533 "params": { 00:21:49.533 "name": "Nvme4", 00:21:49.533 "trtype": "tcp", 00:21:49.533 "traddr": "10.0.0.2", 00:21:49.533 "adrfam": "ipv4", 00:21:49.533 "trsvcid": "4420", 00:21:49.533 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:49.533 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:49.533 "hdgst": false, 00:21:49.533 "ddgst": false 00:21:49.533 }, 00:21:49.533 "method": "bdev_nvme_attach_controller" 00:21:49.533 },{ 00:21:49.533 "params": { 00:21:49.533 "name": "Nvme5", 00:21:49.533 "trtype": "tcp", 00:21:49.533 "traddr": "10.0.0.2", 00:21:49.533 "adrfam": "ipv4", 00:21:49.533 "trsvcid": "4420", 00:21:49.533 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:49.533 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:49.533 "hdgst": false, 00:21:49.533 "ddgst": false 00:21:49.533 }, 00:21:49.533 "method": "bdev_nvme_attach_controller" 00:21:49.533 },{ 00:21:49.533 "params": { 00:21:49.533 "name": "Nvme6", 00:21:49.533 "trtype": "tcp", 00:21:49.533 "traddr": "10.0.0.2", 00:21:49.533 "adrfam": "ipv4", 00:21:49.533 "trsvcid": "4420", 00:21:49.533 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:49.533 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:49.533 "hdgst": false, 00:21:49.533 "ddgst": false 00:21:49.533 }, 00:21:49.533 "method": "bdev_nvme_attach_controller" 00:21:49.533 },{ 00:21:49.533 "params": { 00:21:49.533 "name": "Nvme7", 00:21:49.533 "trtype": "tcp", 00:21:49.533 "traddr": "10.0.0.2", 00:21:49.533 "adrfam": "ipv4", 00:21:49.533 "trsvcid": "4420", 00:21:49.533 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:49.533 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:49.533 "hdgst": false, 00:21:49.533 "ddgst": false 00:21:49.533 }, 00:21:49.533 "method": "bdev_nvme_attach_controller" 00:21:49.533 },{ 00:21:49.533 "params": { 00:21:49.533 "name": "Nvme8", 00:21:49.533 "trtype": "tcp", 00:21:49.533 
"traddr": "10.0.0.2", 00:21:49.533 "adrfam": "ipv4", 00:21:49.533 "trsvcid": "4420", 00:21:49.533 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:49.533 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:49.533 "hdgst": false, 00:21:49.533 "ddgst": false 00:21:49.533 }, 00:21:49.533 "method": "bdev_nvme_attach_controller" 00:21:49.533 },{ 00:21:49.533 "params": { 00:21:49.533 "name": "Nvme9", 00:21:49.533 "trtype": "tcp", 00:21:49.533 "traddr": "10.0.0.2", 00:21:49.533 "adrfam": "ipv4", 00:21:49.533 "trsvcid": "4420", 00:21:49.533 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:49.533 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:49.533 "hdgst": false, 00:21:49.533 "ddgst": false 00:21:49.533 }, 00:21:49.533 "method": "bdev_nvme_attach_controller" 00:21:49.533 },{ 00:21:49.533 "params": { 00:21:49.533 "name": "Nvme10", 00:21:49.533 "trtype": "tcp", 00:21:49.533 "traddr": "10.0.0.2", 00:21:49.533 "adrfam": "ipv4", 00:21:49.533 "trsvcid": "4420", 00:21:49.533 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:49.533 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:49.533 "hdgst": false, 00:21:49.533 "ddgst": false 00:21:49.533 }, 00:21:49.533 "method": "bdev_nvme_attach_controller" 00:21:49.533 }' 00:21:49.533 [2024-07-12 19:13:51.910623] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.533 [2024-07-12 19:13:51.983611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.912 19:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.912 19:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:50.912 19:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:50.912 19:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.912 19:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:50.912 19:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.912 19:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 366287 00:21:50.912 19:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:50.912 19:13:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:51.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 366287 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:51.852 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 366002 00:21:51.852 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:51.852 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:51.852 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:51.852 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:51.852 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.852 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.852 { 00:21:51.852 "params": { 00:21:51.852 "name": "Nvme$subsystem", 00:21:51.852 "trtype": "$TEST_TRANSPORT", 00:21:51.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.852 "adrfam": "ipv4", 00:21:51.852 "trsvcid": "$NVMF_PORT", 00:21:51.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.852 "hdgst": ${hdgst:-false}, 00:21:51.852 "ddgst": ${ddgst:-false} 00:21:51.852 }, 00:21:51.852 "method": "bdev_nvme_attach_controller" 00:21:51.852 } 00:21:51.852 EOF 00:21:51.852 )") 00:21:51.852 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.852 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.852 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.852 { 00:21:51.852 "params": { 00:21:51.852 "name": "Nvme$subsystem", 00:21:51.852 "trtype": "$TEST_TRANSPORT", 00:21:51.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.852 "adrfam": "ipv4", 00:21:51.852 "trsvcid": "$NVMF_PORT", 00:21:51.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.852 "hdgst": ${hdgst:-false}, 00:21:51.852 "ddgst": ${ddgst:-false} 00:21:51.852 }, 00:21:51.852 "method": "bdev_nvme_attach_controller" 00:21:51.852 } 00:21:51.852 EOF 00:21:51.852 )") 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.853 { 00:21:51.853 "params": { 00:21:51.853 "name": "Nvme$subsystem", 00:21:51.853 "trtype": "$TEST_TRANSPORT", 00:21:51.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.853 "adrfam": "ipv4", 00:21:51.853 "trsvcid": "$NVMF_PORT", 00:21:51.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.853 "hdgst": ${hdgst:-false}, 00:21:51.853 "ddgst": ${ddgst:-false} 00:21:51.853 }, 00:21:51.853 "method": "bdev_nvme_attach_controller" 00:21:51.853 } 00:21:51.853 EOF 00:21:51.853 )") 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.853 { 00:21:51.853 "params": { 00:21:51.853 "name": "Nvme$subsystem", 00:21:51.853 "trtype": "$TEST_TRANSPORT", 00:21:51.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.853 "adrfam": "ipv4", 00:21:51.853 "trsvcid": "$NVMF_PORT", 00:21:51.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.853 "hdgst": ${hdgst:-false}, 00:21:51.853 "ddgst": ${ddgst:-false} 00:21:51.853 }, 00:21:51.853 "method": "bdev_nvme_attach_controller" 00:21:51.853 } 00:21:51.853 EOF 00:21:51.853 )") 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:21:51.853 { 00:21:51.853 "params": { 00:21:51.853 "name": "Nvme$subsystem", 00:21:51.853 "trtype": "$TEST_TRANSPORT", 00:21:51.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.853 "adrfam": "ipv4", 00:21:51.853 "trsvcid": "$NVMF_PORT", 00:21:51.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.853 "hdgst": ${hdgst:-false}, 00:21:51.853 "ddgst": ${ddgst:-false} 00:21:51.853 }, 00:21:51.853 "method": "bdev_nvme_attach_controller" 00:21:51.853 } 00:21:51.853 EOF 00:21:51.853 )") 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.853 { 00:21:51.853 "params": { 00:21:51.853 "name": "Nvme$subsystem", 00:21:51.853 "trtype": "$TEST_TRANSPORT", 00:21:51.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.853 "adrfam": "ipv4", 00:21:51.853 "trsvcid": "$NVMF_PORT", 00:21:51.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.853 "hdgst": ${hdgst:-false}, 00:21:51.853 "ddgst": ${ddgst:-false} 00:21:51.853 }, 00:21:51.853 "method": "bdev_nvme_attach_controller" 00:21:51.853 } 00:21:51.853 EOF 00:21:51.853 )") 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.853 { 00:21:51.853 "params": { 00:21:51.853 "name": "Nvme$subsystem", 00:21:51.853 "trtype": "$TEST_TRANSPORT", 00:21:51.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.853 "adrfam": "ipv4", 00:21:51.853 "trsvcid": "$NVMF_PORT", 00:21:51.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.853 "hdgst": ${hdgst:-false}, 00:21:51.853 "ddgst": ${ddgst:-false} 00:21:51.853 }, 00:21:51.853 "method": "bdev_nvme_attach_controller" 00:21:51.853 } 00:21:51.853 EOF 00:21:51.853 )") 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.853 [2024-07-12 19:13:54.391919] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:21:51.853 [2024-07-12 19:13:54.391969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366767 ] 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.853 { 00:21:51.853 "params": { 00:21:51.853 "name": "Nvme$subsystem", 00:21:51.853 "trtype": "$TEST_TRANSPORT", 00:21:51.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.853 "adrfam": "ipv4", 00:21:51.853 "trsvcid": "$NVMF_PORT", 00:21:51.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.853 "hdgst": ${hdgst:-false}, 00:21:51.853 "ddgst": ${ddgst:-false} 00:21:51.853 }, 00:21:51.853 "method": "bdev_nvme_attach_controller" 00:21:51.853 } 00:21:51.853 EOF 00:21:51.853 )") 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.853 { 00:21:51.853 "params": { 00:21:51.853 "name": "Nvme$subsystem", 00:21:51.853 "trtype": "$TEST_TRANSPORT", 00:21:51.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.853 "adrfam": "ipv4", 00:21:51.853 "trsvcid": "$NVMF_PORT", 00:21:51.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.853 "hdgst": ${hdgst:-false}, 00:21:51.853 "ddgst": ${ddgst:-false} 00:21:51.853 }, 00:21:51.853 "method": "bdev_nvme_attach_controller" 00:21:51.853 } 00:21:51.853 EOF 00:21:51.853 )") 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.853 { 00:21:51.853 "params": { 00:21:51.853 "name": "Nvme$subsystem", 00:21:51.853 "trtype": "$TEST_TRANSPORT", 00:21:51.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.853 "adrfam": "ipv4", 00:21:51.853 "trsvcid": "$NVMF_PORT", 00:21:51.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.853 "hdgst": ${hdgst:-false}, 00:21:51.853 "ddgst": ${ddgst:-false} 00:21:51.853 }, 00:21:51.853 "method": "bdev_nvme_attach_controller" 00:21:51.853 } 00:21:51.853 EOF 00:21:51.853 )") 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:51.853 19:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:51.853 "params": { 00:21:51.853 "name": "Nvme1", 00:21:51.853 "trtype": "tcp", 00:21:51.853 "traddr": "10.0.0.2", 00:21:51.853 "adrfam": "ipv4", 00:21:51.853 "trsvcid": "4420", 00:21:51.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:51.853 "hdgst": false, 00:21:51.854 "ddgst": false 00:21:51.854 }, 00:21:51.854 "method": "bdev_nvme_attach_controller" 00:21:51.854 },{ 00:21:51.854 "params": { 00:21:51.854 "name": "Nvme2", 00:21:51.854 "trtype": "tcp", 00:21:51.854 "traddr": "10.0.0.2", 00:21:51.854 "adrfam": "ipv4", 00:21:51.854 "trsvcid": "4420", 00:21:51.854 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:51.854 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:51.854 "hdgst": false, 00:21:51.854 "ddgst": false 00:21:51.854 }, 00:21:51.854 "method": "bdev_nvme_attach_controller" 00:21:51.854 },{ 00:21:51.854 "params": { 00:21:51.854 "name": "Nvme3", 00:21:51.854 "trtype": "tcp", 00:21:51.854 "traddr": "10.0.0.2", 00:21:51.854 "adrfam": "ipv4", 00:21:51.854 "trsvcid": "4420", 00:21:51.854 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:51.854 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:51.854 "hdgst": false, 00:21:51.854 "ddgst": false 00:21:51.854 }, 00:21:51.854 "method": "bdev_nvme_attach_controller" 00:21:51.854 },{ 00:21:51.854 "params": { 00:21:51.854 "name": "Nvme4", 00:21:51.854 "trtype": "tcp", 00:21:51.854 "traddr": "10.0.0.2", 00:21:51.854 "adrfam": "ipv4", 00:21:51.854 "trsvcid": "4420", 00:21:51.854 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:51.854 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:51.854 "hdgst": false, 00:21:51.854 "ddgst": false 00:21:51.854 }, 00:21:51.854 "method": "bdev_nvme_attach_controller" 00:21:51.854 },{ 00:21:51.854 "params": { 00:21:51.854 "name": "Nvme5", 00:21:51.854 "trtype": "tcp", 00:21:51.854 "traddr": "10.0.0.2", 00:21:51.854 "adrfam": "ipv4", 00:21:51.854 "trsvcid": "4420", 00:21:51.854 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:51.854 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:51.854 "hdgst": false, 00:21:51.854 "ddgst": false 00:21:51.854 }, 00:21:51.854 "method": "bdev_nvme_attach_controller" 00:21:51.854 },{ 00:21:51.854 "params": { 00:21:51.854 "name": "Nvme6", 00:21:51.854 "trtype": "tcp", 00:21:51.854 "traddr": "10.0.0.2", 00:21:51.854 "adrfam": "ipv4", 00:21:51.854 "trsvcid": "4420", 00:21:51.854 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:51.854 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:51.854 "hdgst": false, 00:21:51.854 "ddgst": false 00:21:51.854 }, 00:21:51.854 "method": "bdev_nvme_attach_controller" 00:21:51.854 },{ 00:21:51.854 "params": { 00:21:51.854 "name": "Nvme7", 00:21:51.854 "trtype": "tcp", 00:21:51.854 "traddr": "10.0.0.2", 00:21:51.854 "adrfam": "ipv4", 00:21:51.854 "trsvcid": "4420", 00:21:51.854 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:51.854 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:51.854 "hdgst": false, 00:21:51.854 "ddgst": false 00:21:51.854 }, 00:21:51.854 "method": "bdev_nvme_attach_controller" 00:21:51.854 },{ 00:21:51.854 "params": { 00:21:51.854 "name": "Nvme8", 00:21:51.854 "trtype": "tcp", 00:21:51.854 "traddr": "10.0.0.2", 00:21:51.854 "adrfam": "ipv4", 00:21:51.854 "trsvcid": "4420", 00:21:51.854 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:51.854 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:51.854 "hdgst": false, 
00:21:51.854 "ddgst": false 00:21:51.854 }, 00:21:51.854 "method": "bdev_nvme_attach_controller" 00:21:51.854 },{ 00:21:51.854 "params": { 00:21:51.854 "name": "Nvme9", 00:21:51.854 "trtype": "tcp", 00:21:51.854 "traddr": "10.0.0.2", 00:21:51.854 "adrfam": "ipv4", 00:21:51.854 "trsvcid": "4420", 00:21:51.854 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:51.854 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:51.854 "hdgst": false, 00:21:51.854 "ddgst": false 00:21:51.854 }, 00:21:51.854 "method": "bdev_nvme_attach_controller" 00:21:51.854 },{ 00:21:51.854 "params": { 00:21:51.854 "name": "Nvme10", 00:21:51.854 "trtype": "tcp", 00:21:51.854 "traddr": "10.0.0.2", 00:21:51.854 "adrfam": "ipv4", 00:21:51.854 "trsvcid": "4420", 00:21:51.854 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:51.854 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:51.854 "hdgst": false, 00:21:51.854 "ddgst": false 00:21:51.854 }, 00:21:51.854 "method": "bdev_nvme_attach_controller" 00:21:51.854 }' 00:21:51.854 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.113 [2024-07-12 19:13:54.458577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.113 [2024-07-12 19:13:54.532684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.493 Running I/O for 1 seconds... 00:21:54.431 00:21:54.431 Latency(us) 00:21:54.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.431 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:54.431 Verification LBA range: start 0x0 length 0x400 00:21:54.431 Nvme1n1 : 1.07 238.88 14.93 0.00 0.00 265364.93 16070.57 231598.53 00:21:54.431 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:54.431 Verification LBA range: start 0x0 length 0x400 00:21:54.431 Nvme2n1 : 1.14 281.24 17.58 0.00 0.00 221746.40 16868.40 215186.03 00:21:54.431 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:54.431 Verification LBA range: start 0x0 length 0x400 00:21:54.431 Nvme3n1 : 1.07 309.90 19.37 0.00 0.00 196681.80 5841.25 199685.34 00:21:54.431 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:54.431 Verification LBA range: start 0x0 length 0x400 00:21:54.431 Nvme4n1 : 1.13 284.09 17.76 0.00 0.00 213701.50 13506.11 220656.86 00:21:54.431 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:54.431 Verification LBA range: start 0x0 length 0x400 00:21:54.431 Nvme5n1 : 1.14 285.04 17.81 0.00 0.00 208796.45 6468.12 200597.15 00:21:54.431 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:54.431 Verification LBA range: start 0x0 length 0x400 00:21:54.431 Nvme6n1 : 1.14 283.16 17.70 0.00 0.00 208061.67 4074.63 215186.03 00:21:54.431 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:54.431 Verification LBA range: start 0x0 length 0x400 00:21:54.431 Nvme7n1 : 1.13 282.13 17.63 0.00 0.00 205783.40 17438.27 211538.81 00:21:54.431 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:54.431 Verification LBA range: start 0x0 length 0x400 00:21:54.431 Nvme8n1 : 1.14 280.29 17.52 0.00 0.00 204106.00 16754.42 217009.64 00:21:54.431 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:54.431 Verification LBA range: start 0x0 length 0x400 00:21:54.431 Nvme9n1 : 1.15 281.91 17.62 0.00 0.00 199935.63 15728.64 224304.08 00:21:54.431 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:54.431 Verification LBA range: start 0x0 length 0x400 00:21:54.431 Nvme10n1 : 1.15 281.20 17.58 0.00 0.00 197385.43 8605.16 238892.97 00:21:54.431 =================================================================================================================== 00:21:54.431 Total : 2807.84 175.49 0.00 0.00 210973.29 4074.63 238892.97 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:54.691 rmmod nvme_tcp 00:21:54.691 rmmod nvme_fabrics 00:21:54.691 rmmod nvme_keyring 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 366002 ']' 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 366002 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 366002 ']' 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 366002 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 366002 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 366002' 00:21:54.691 killing process with pid 366002 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 366002 00:21:54.691 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 366002 
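The teardown traced above follows the killprocess pattern: confirm the pid is non-empty, confirm it is still alive (kill -0), check via ps that the pid still names the process we started rather than a recycled one, then kill and wait so the exit status is reaped. A hypothetical condensed sketch follows; the real helper lives in common/autotest_common.sh and differs in detail.

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1               # guard: no pid, nothing to kill
    kill -0 "$pid" 2>/dev/null || return 0  # already gone
    if [ "$(uname)" = Linux ]; then
        # Refuse to kill a recycled pid now owned by an unrelated process.
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                             # reap it so the exit code is observed
}

The trap registered earlier in the run ('process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' on SIGINT SIGTERM EXIT) is what guarantees this cleanup runs even when a test step fails partway through.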
00:21:55.260 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:55.260 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:55.260 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:55.260 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.260 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:55.260 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.260 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.260 19:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.169 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:57.169 00:21:57.169 real 0m15.016s 00:21:57.169 user 0m33.157s 00:21:57.169 sys 0m5.631s 00:21:57.169 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:57.169 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:57.169 ************************************ 00:21:57.169 END TEST nvmf_shutdown_tc1 00:21:57.169 ************************************ 00:21:57.169 19:13:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:57.169 19:13:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:57.169 19:13:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:57.169 19:13:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:57.169 19:13:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:57.169 ************************************ 00:21:57.169 START TEST nvmf_shutdown_tc2 00:21:57.169 ************************************ 00:21:57.169 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:57.169 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:57.169 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:57.169 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:57.169 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:57.170 
19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:57.170 19:13:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:57.170 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:57.170 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:21:57.170 Found net devices under 0000:86:00.0: cvl_0_0 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:57.170 Found net devices under 0000:86:00.1: cvl_0_1 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:57.170 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.430 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.430 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:21:57.430 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.430 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:57.430 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.430 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.430 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.430 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:57.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:21:57.430 00:21:57.430 --- 10.0.0.2 ping statistics --- 00:21:57.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.430 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:21:57.430 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:57.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:21:57.430 00:21:57.430 --- 10.0.0.1 ping statistics --- 00:21:57.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.430 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:21:57.430 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.430 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:57.430 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:57.431 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.431 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:57.431 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:57.431 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.431 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:57.431 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:57.431 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:57.431 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:57.431 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:57.431 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.698 19:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=367790 00:21:57.698 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 367790 00:21:57.698 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 
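Before starting the target, nvmftestinit rebuilds the network-namespace topology traced above: the target-side e810 port (cvl_0_0) moves into a private netns, each side gets a 10.0.0.x/24 address, port 4420 is opened in the firewall, and nvmf_tgt then runs inside the namespace. Condensed from the ip/iptables commands in the trace (root required; interface names come from the NIC detection earlier in the log):

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"

ip addr add 10.0.0.1/24 dev cvl_0_1                                           # initiator side
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity pings in both directions, then the target runs inside the namespace:
ping -c 1 10.0.0.2
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1
ip netns exec "$NVMF_TARGET_NAMESPACE" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E

Running the target in its own namespace lets a single host act as both NVMe-oF target and initiator over real physical ports, rather than having the kernel short-circuit the traffic through loopback.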
00:21:57.698 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 367790 ']' 00:21:57.698 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.698 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.698 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.698 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.698 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.698 [2024-07-12 19:14:00.057397] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:21:57.698 [2024-07-12 19:14:00.057446] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.698 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.698 [2024-07-12 19:14:00.125245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:57.698 [2024-07-12 19:14:00.204315] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.698 [2024-07-12 19:14:00.204352] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.698 [2024-07-12 19:14:00.204359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.698 [2024-07-12 19:14:00.204366] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.698 [2024-07-12 19:14:00.204370] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
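
The waitforlisten call traced above (common/autotest_common.sh@829-838 here, with the matching @858/@862 exit path a little further on) gates the test on the target's RPC socket coming up. The helper's body is not in this log; a rough sketch of the retry loop it implements, with the rpc.py path and the rpc_get_methods liveness probe as assumptions:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = max_retries; i != 0; i--)); do
          # give up early if the target process died during startup
          kill -0 "$pid" 2> /dev/null || return 1
          # any RPC answered on the socket means the app is up and listening
          if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
              break
          fi
          sleep 0.5
      done
      (( i == 0 )) && return 1    # retries exhausted (the @858 check in the trace)
      return 0                    # the @862 'return 0' logged below
  }
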
00:21:57.698 [2024-07-12 19:14:00.204454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.698 [2024-07-12 19:14:00.204559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:57.698 [2024-07-12 19:14:00.204647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.698 [2024-07-12 19:14:00.204649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:58.314 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.314 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:58.314 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:58.314 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:58.314 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.593 [2024-07-12 19:14:00.909291] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:58.593 19:14:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.593 19:14:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.593 Malloc1 00:21:58.593 [2024-07-12 19:14:01.005053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.593 Malloc2 00:21:58.593 Malloc3 00:21:58.593 Malloc4 00:21:58.593 Malloc5 00:21:58.870 Malloc6 00:21:58.870 Malloc7 00:21:58.870 Malloc8 00:21:58.870 Malloc9 00:21:58.870 Malloc10 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=368077 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 368077 /var/tmp/bdevperf.sock 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 368077 ']' 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:58.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
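
The shutdown.sh@26-35 loop traced above is what produces the ten Malloc-backed subsystems: each of the ten iterations appends a block of RPC commands to rpcs.txt (the repeated @28 'cat' records), and a single rpc_cmd invocation then replays the whole file against the target in one session. The exact RPC arguments are not visible in this trace; the sketch below is a plausible reconstruction, with the Malloc size/block size, the SPDK$i serial numbers, and batching via stdin redirect all assumed (the cnode NQN pattern and the 10.0.0.2:4420 listener do appear later in this log):

  rm -rf "$testdir/rpcs.txt"
  for i in "${num_subsystems[@]}"; do    # num_subsystems=({1..10}) per shutdown.sh@22
      {
          echo "bdev_malloc_create -b Malloc$i 64 512"
          echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
          echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
          echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
      } >> "$testdir/rpcs.txt"
  done
  rpc_cmd < "$testdir/rpcs.txt"          # one batched RPC session creates everything at once

This is why the Malloc1..Malloc10 lines and the single "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice appear together above.
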
00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:58.870 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:58.870 { 00:21:58.870 "params": { 00:21:58.870 "name": "Nvme$subsystem", 00:21:58.870 "trtype": "$TEST_TRANSPORT", 00:21:58.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:58.870 "adrfam": "ipv4", 00:21:58.870 "trsvcid": "$NVMF_PORT", 00:21:58.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:58.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:58.870 "hdgst": ${hdgst:-false}, 00:21:58.870 "ddgst": ${ddgst:-false} 00:21:58.870 }, 00:21:58.870 "method": "bdev_nvme_attach_controller" 00:21:58.870 } 00:21:58.870 EOF 00:21:58.870 )") 00:21:59.153 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.153 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.153 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.153 { 00:21:59.153 "params": { 00:21:59.153 "name": "Nvme$subsystem", 00:21:59.153 "trtype": "$TEST_TRANSPORT", 00:21:59.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.153 "adrfam": "ipv4", 00:21:59.153 "trsvcid": "$NVMF_PORT", 00:21:59.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.153 "hdgst": ${hdgst:-false}, 00:21:59.153 "ddgst": ${ddgst:-false} 00:21:59.153 }, 00:21:59.153 "method": "bdev_nvme_attach_controller" 00:21:59.153 } 00:21:59.153 EOF 00:21:59.153 )") 00:21:59.153 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.153 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.153 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.153 { 00:21:59.153 "params": { 00:21:59.153 "name": "Nvme$subsystem", 00:21:59.153 "trtype": "$TEST_TRANSPORT", 00:21:59.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.153 "adrfam": "ipv4", 00:21:59.153 "trsvcid": "$NVMF_PORT", 00:21:59.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.153 "hdgst": ${hdgst:-false}, 00:21:59.153 "ddgst": ${ddgst:-false} 00:21:59.153 }, 00:21:59.153 "method": "bdev_nvme_attach_controller" 00:21:59.153 } 00:21:59.153 EOF 00:21:59.153 )") 00:21:59.153 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.153 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.153 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.153 { 00:21:59.153 "params": { 00:21:59.153 "name": "Nvme$subsystem", 00:21:59.153 "trtype": "$TEST_TRANSPORT", 00:21:59.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.153 "adrfam": "ipv4", 00:21:59.153 "trsvcid": "$NVMF_PORT", 
00:21:59.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.153 "hdgst": ${hdgst:-false}, 00:21:59.153 "ddgst": ${ddgst:-false} 00:21:59.153 }, 00:21:59.153 "method": "bdev_nvme_attach_controller" 00:21:59.153 } 00:21:59.153 EOF 00:21:59.153 )") 00:21:59.153 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.153 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.153 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.153 { 00:21:59.153 "params": { 00:21:59.153 "name": "Nvme$subsystem", 00:21:59.153 "trtype": "$TEST_TRANSPORT", 00:21:59.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.153 "adrfam": "ipv4", 00:21:59.153 "trsvcid": "$NVMF_PORT", 00:21:59.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.153 "hdgst": ${hdgst:-false}, 00:21:59.153 "ddgst": ${ddgst:-false} 00:21:59.153 }, 00:21:59.153 "method": "bdev_nvme_attach_controller" 00:21:59.153 } 00:21:59.153 EOF 00:21:59.153 )") 00:21:59.153 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.153 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.153 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.153 { 00:21:59.153 "params": { 00:21:59.153 "name": "Nvme$subsystem", 00:21:59.153 "trtype": "$TEST_TRANSPORT", 00:21:59.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.153 "adrfam": "ipv4", 00:21:59.153 "trsvcid": "$NVMF_PORT", 00:21:59.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.153 "hdgst": ${hdgst:-false}, 00:21:59.154 "ddgst": ${ddgst:-false} 00:21:59.154 }, 00:21:59.154 "method": "bdev_nvme_attach_controller" 00:21:59.154 } 00:21:59.154 EOF 00:21:59.154 )") 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.154 { 00:21:59.154 "params": { 00:21:59.154 "name": "Nvme$subsystem", 00:21:59.154 "trtype": "$TEST_TRANSPORT", 00:21:59.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.154 "adrfam": "ipv4", 00:21:59.154 "trsvcid": "$NVMF_PORT", 00:21:59.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.154 "hdgst": ${hdgst:-false}, 00:21:59.154 "ddgst": ${ddgst:-false} 00:21:59.154 }, 00:21:59.154 "method": "bdev_nvme_attach_controller" 00:21:59.154 } 00:21:59.154 EOF 00:21:59.154 )") 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.154 [2024-07-12 19:14:01.471801] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:21:59.154 [2024-07-12 19:14:01.471851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid368077 ] 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.154 { 00:21:59.154 "params": { 00:21:59.154 "name": "Nvme$subsystem", 00:21:59.154 "trtype": "$TEST_TRANSPORT", 00:21:59.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.154 "adrfam": "ipv4", 00:21:59.154 "trsvcid": "$NVMF_PORT", 00:21:59.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.154 "hdgst": ${hdgst:-false}, 00:21:59.154 "ddgst": ${ddgst:-false} 00:21:59.154 }, 00:21:59.154 "method": "bdev_nvme_attach_controller" 00:21:59.154 } 00:21:59.154 EOF 00:21:59.154 )") 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.154 { 00:21:59.154 "params": { 00:21:59.154 "name": "Nvme$subsystem", 00:21:59.154 "trtype": "$TEST_TRANSPORT", 00:21:59.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.154 "adrfam": "ipv4", 00:21:59.154 "trsvcid": "$NVMF_PORT", 00:21:59.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.154 "hdgst": ${hdgst:-false}, 00:21:59.154 "ddgst": ${ddgst:-false} 00:21:59.154 }, 00:21:59.154 "method": "bdev_nvme_attach_controller" 00:21:59.154 } 00:21:59.154 EOF 00:21:59.154 )") 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.154 { 00:21:59.154 "params": { 00:21:59.154 "name": "Nvme$subsystem", 00:21:59.154 "trtype": "$TEST_TRANSPORT", 00:21:59.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.154 "adrfam": "ipv4", 00:21:59.154 "trsvcid": "$NVMF_PORT", 00:21:59.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.154 "hdgst": ${hdgst:-false}, 00:21:59.154 "ddgst": ${ddgst:-false} 00:21:59.154 }, 00:21:59.154 "method": "bdev_nvme_attach_controller" 00:21:59.154 } 00:21:59.154 EOF 00:21:59.154 )") 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
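
gen_nvmf_target_json, whose internals are traced above (nvmf/common.sh@532-556: the config=() array, one heredoc stanza per subsystem, then jq validation), assembles the bdevperf configuration in memory and hands it over through process substitution, which is why the bdevperf command line earlier reads '--json /dev/fd/63'. A condensed sketch of that assembly, using the values this run substitutes (printf stands in for the real script's heredoc, and the jq . validation step is omitted):

  gen_nvmf_target_json() {
      local subsystem
      local config=()
      for subsystem in "${@:-1}"; do
          # one bdev_nvme_attach_controller stanza per subsystem
          config+=("$(printf '{ "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }' "$subsystem" "$subsystem" "$subsystem")")
      done
      local IFS=,
      printf '%s\n' "${config[*]}"    # comma-joined list: the exact shape printed at @558 below
  }
  # Consumed via process substitution, hence '--json /dev/fd/63' in the bdevperf command:
  #   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json {1..10}) -q 64 -o 65536 -w verify -t 10
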
00:21:59.154 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:59.154 19:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:59.154 "params": { 00:21:59.154 "name": "Nvme1", 00:21:59.154 "trtype": "tcp", 00:21:59.154 "traddr": "10.0.0.2", 00:21:59.154 "adrfam": "ipv4", 00:21:59.154 "trsvcid": "4420", 00:21:59.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.154 "hdgst": false, 00:21:59.154 "ddgst": false 00:21:59.154 }, 00:21:59.154 "method": "bdev_nvme_attach_controller" 00:21:59.154 },{ 00:21:59.154 "params": { 00:21:59.154 "name": "Nvme2", 00:21:59.154 "trtype": "tcp", 00:21:59.154 "traddr": "10.0.0.2", 00:21:59.154 "adrfam": "ipv4", 00:21:59.154 "trsvcid": "4420", 00:21:59.154 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:59.154 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:59.154 "hdgst": false, 00:21:59.154 "ddgst": false 00:21:59.154 }, 00:21:59.154 "method": "bdev_nvme_attach_controller" 00:21:59.154 },{ 00:21:59.154 "params": { 00:21:59.154 "name": "Nvme3", 00:21:59.154 "trtype": "tcp", 00:21:59.154 "traddr": "10.0.0.2", 00:21:59.154 "adrfam": "ipv4", 00:21:59.154 "trsvcid": "4420", 00:21:59.154 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:59.154 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:59.154 "hdgst": false, 00:21:59.154 "ddgst": false 00:21:59.154 }, 00:21:59.154 "method": "bdev_nvme_attach_controller" 00:21:59.154 },{ 00:21:59.154 "params": { 00:21:59.154 "name": "Nvme4", 00:21:59.154 "trtype": "tcp", 00:21:59.154 "traddr": "10.0.0.2", 00:21:59.155 "adrfam": "ipv4", 00:21:59.155 "trsvcid": "4420", 00:21:59.155 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:59.155 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:59.155 "hdgst": false, 00:21:59.155 "ddgst": false 00:21:59.155 }, 00:21:59.155 "method": "bdev_nvme_attach_controller" 00:21:59.155 },{ 00:21:59.155 "params": { 00:21:59.155 "name": "Nvme5", 00:21:59.155 "trtype": "tcp", 00:21:59.155 "traddr": "10.0.0.2", 00:21:59.155 "adrfam": "ipv4", 00:21:59.155 "trsvcid": "4420", 00:21:59.155 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:59.155 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:59.155 "hdgst": false, 00:21:59.155 "ddgst": false 00:21:59.155 }, 00:21:59.155 "method": "bdev_nvme_attach_controller" 00:21:59.155 },{ 00:21:59.155 "params": { 00:21:59.155 "name": "Nvme6", 00:21:59.155 "trtype": "tcp", 00:21:59.155 "traddr": "10.0.0.2", 00:21:59.155 "adrfam": "ipv4", 00:21:59.155 "trsvcid": "4420", 00:21:59.155 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:59.155 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:59.155 "hdgst": false, 00:21:59.155 "ddgst": false 00:21:59.155 }, 00:21:59.155 "method": "bdev_nvme_attach_controller" 00:21:59.155 },{ 00:21:59.155 "params": { 00:21:59.155 "name": "Nvme7", 00:21:59.155 "trtype": "tcp", 00:21:59.155 "traddr": "10.0.0.2", 00:21:59.155 "adrfam": "ipv4", 00:21:59.155 "trsvcid": "4420", 00:21:59.155 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:59.155 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:59.155 "hdgst": false, 00:21:59.155 "ddgst": false 00:21:59.155 }, 00:21:59.155 "method": "bdev_nvme_attach_controller" 00:21:59.155 },{ 00:21:59.155 "params": { 00:21:59.155 "name": "Nvme8", 00:21:59.155 "trtype": "tcp", 00:21:59.155 "traddr": "10.0.0.2", 00:21:59.155 "adrfam": "ipv4", 00:21:59.155 "trsvcid": "4420", 00:21:59.155 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:59.155 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:21:59.155 "hdgst": false, 00:21:59.155 "ddgst": false 00:21:59.155 }, 00:21:59.155 "method": "bdev_nvme_attach_controller" 00:21:59.155 },{ 00:21:59.155 "params": { 00:21:59.155 "name": "Nvme9", 00:21:59.155 "trtype": "tcp", 00:21:59.155 "traddr": "10.0.0.2", 00:21:59.155 "adrfam": "ipv4", 00:21:59.155 "trsvcid": "4420", 00:21:59.155 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:59.155 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:59.155 "hdgst": false, 00:21:59.155 "ddgst": false 00:21:59.155 }, 00:21:59.155 "method": "bdev_nvme_attach_controller" 00:21:59.155 },{ 00:21:59.155 "params": { 00:21:59.155 "name": "Nvme10", 00:21:59.155 "trtype": "tcp", 00:21:59.155 "traddr": "10.0.0.2", 00:21:59.155 "adrfam": "ipv4", 00:21:59.155 "trsvcid": "4420", 00:21:59.155 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:59.155 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:59.155 "hdgst": false, 00:21:59.155 "ddgst": false 00:21:59.155 }, 00:21:59.155 "method": "bdev_nvme_attach_controller" 00:21:59.155 }' 00:21:59.155 [2024-07-12 19:14:01.541710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.155 [2024-07-12 19:14:01.617089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.148 Running I/O for 10 seconds... 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:01.148 19:14:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.148 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.414 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.414 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:01.414 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:01.414 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:01.685 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:01.685 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:01.685 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.685 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.685 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.685 19:14:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.685 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.685 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:22:01.685 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:22:01.685 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:01.685 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:01.685 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:01.685 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 368077 00:22:01.685 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 368077 ']' 00:22:01.685 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 368077 00:22:01.685 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:22:01.685 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:01.685 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 368077 00:22:01.685 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:01.685 19:14:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:22:01.686 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 368077'
killing process with pid 368077
00:22:01.686 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 368077
00:22:01.686 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 368077
00:22:01.686 Received shutdown signal, test time was about 0.896759 seconds
00:22:01.686
00:22:01.686 Latency(us)
00:22:01.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:01.686 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.686 Verification LBA range: start 0x0 length 0x400
00:22:01.686 Nvme1n1 : 0.89 288.19 18.01 0.00 0.00 219567.42 15956.59 219745.06
00:22:01.686 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.686 Verification LBA range: start 0x0 length 0x400
00:22:01.686 Nvme2n1 : 0.89 291.86 18.24 0.00 0.00 212224.09 2692.67 206979.78
00:22:01.686 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.686 Verification LBA range: start 0x0 length 0x400
00:22:01.686 Nvme3n1 : 0.87 294.34 18.40 0.00 0.00 206991.58 14702.86 214274.23
00:22:01.686 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.686 Verification LBA range: start 0x0 length 0x400
00:22:01.686 Nvme4n1 : 0.88 291.28 18.21 0.00 0.00 205396.37 26898.25 202420.76
00:22:01.686 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.686 Verification LBA range: start 0x0 length 0x400
00:22:01.686 Nvme5n1 : 0.90 285.69 17.86 0.00 0.00 205688.65 18236.10 217921.45
00:22:01.686 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.686 Verification LBA range: start 0x0 length 0x400
00:22:01.686 Nvme6n1 : 0.88 295.48 18.47 0.00 0.00 193913.13 4644.51 213362.42
00:22:01.686 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.686 Verification LBA range: start 0x0 length 0x400
00:22:01.686 Nvme7n1 : 0.88 289.60 18.10 0.00 0.00 194675.76 15272.74 215186.03
00:22:01.686 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.686 Verification LBA range: start 0x0 length 0x400
00:22:01.686 Nvme8n1 : 0.89 286.22 17.89 0.00 0.00 192864.06 11283.59 219745.06
00:22:01.686 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.686 Verification LBA range: start 0x0 length 0x400
00:22:01.686 Nvme9n1 : 0.86 227.05 14.19 0.00 0.00 236578.56 4530.53 216097.84
00:22:01.686 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.686 Verification LBA range: start 0x0 length 0x400
00:22:01.686 Nvme10n1 : 0.87 224.76 14.05 0.00 0.00 233875.71 4644.51 235245.75
00:22:01.686 ===================================================================================================================
00:22:01.686 Total : 2774.47 173.40 0.00 0.00 208926.38 2692.67 235245.75
00:22:01.968 19:14:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 367790 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:22:02.949 19:14:05
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:02.949 rmmod nvme_tcp 00:22:02.949 rmmod nvme_fabrics 00:22:02.949 rmmod nvme_keyring 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 367790 ']' 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 367790 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 367790 ']' 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 367790 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 367790 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 367790' 00:22:02.949 killing process with pid 367790 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 367790 00:22:02.949 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 367790 00:22:03.552 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:03.552 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:03.552 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:03.552 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:03.552 
19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:03.552 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.552 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.552 19:14:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.553 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:05.553 00:22:05.553 real 0m8.203s 00:22:05.553 user 0m25.269s 00:22:05.553 sys 0m1.340s 00:22:05.553 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:05.553 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:05.553 ************************************ 00:22:05.553 END TEST nvmf_shutdown_tc2 00:22:05.553 ************************************ 00:22:05.553 19:14:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:05.553 19:14:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:05.553 19:14:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:05.553 19:14:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.553 19:14:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:05.553 ************************************ 00:22:05.553 START TEST nvmf_shutdown_tc3 00:22:05.553 ************************************ 00:22:05.553 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.554 
19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:05.554 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:05.554 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.554 19:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:05.554 Found net devices under 0000:86:00.0: cvl_0_0 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.554 19:14:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:05.554 Found net devices under 0000:86:00.1: cvl_0_1 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.554 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.814 19:14:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:05.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:22:05.814 00:22:05.814 --- 10.0.0.2 ping statistics --- 00:22:05.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.814 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:05.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:22:05.814 00:22:05.814 --- 10.0.0.1 ping statistics --- 00:22:05.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.814 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=369367 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 369367 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 369367 ']' 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.814 19:14:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.814 19:14:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.814 [2024-07-12 19:14:08.347933] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:22:05.814 [2024-07-12 19:14:08.347977] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.814 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.074 [2024-07-12 19:14:08.417021] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.074 [2024-07-12 19:14:08.489863] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.074 [2024-07-12 19:14:08.489903] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.074 [2024-07-12 19:14:08.489910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.074 [2024-07-12 19:14:08.489916] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.074 [2024-07-12 19:14:08.489920] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:06.074 [2024-07-12 19:14:08.490001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.074 [2024-07-12 19:14:08.490106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.074 [2024-07-12 19:14:08.490214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.074 [2024-07-12 19:14:08.490215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.641 [2024-07-12 19:14:09.189221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.641 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.900 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.900 Malloc1 00:22:06.900 [2024-07-12 19:14:09.284992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.900 Malloc2 00:22:06.900 Malloc3 00:22:06.900 Malloc4 00:22:06.900 Malloc5 00:22:07.159 Malloc6 00:22:07.159 Malloc7 00:22:07.159 Malloc8 00:22:07.159 Malloc9 00:22:07.159 Malloc10 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=369644 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 369644 /var/tmp/bdevperf.sock 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 369644 ']' 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:07.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.159 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.159 { 00:22:07.160 "params": { 00:22:07.160 "name": "Nvme$subsystem", 00:22:07.160 "trtype": "$TEST_TRANSPORT", 00:22:07.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.160 "adrfam": "ipv4", 00:22:07.160 "trsvcid": "$NVMF_PORT", 00:22:07.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.160 "hdgst": ${hdgst:-false}, 00:22:07.160 "ddgst": ${ddgst:-false} 00:22:07.160 }, 00:22:07.160 "method": "bdev_nvme_attach_controller" 00:22:07.160 } 00:22:07.160 EOF 00:22:07.160 )") 00:22:07.160 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:07.419 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.419 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.419 { 00:22:07.419 "params": { 00:22:07.419 "name": "Nvme$subsystem", 00:22:07.419 "trtype": "$TEST_TRANSPORT", 00:22:07.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.419 "adrfam": "ipv4", 00:22:07.419 "trsvcid": "$NVMF_PORT", 00:22:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:07.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.419 "hdgst": ${hdgst:-false}, 00:22:07.419 "ddgst": ${ddgst:-false} 00:22:07.419 }, 00:22:07.419 "method": "bdev_nvme_attach_controller" 00:22:07.419 } 00:22:07.419 EOF 00:22:07.419 )") 00:22:07.419 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:07.419 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.419 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.419 { 00:22:07.419 "params": { 00:22:07.419 "name": "Nvme$subsystem", 00:22:07.419 "trtype": "$TEST_TRANSPORT", 00:22:07.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.419 "adrfam": "ipv4", 00:22:07.419 "trsvcid": "$NVMF_PORT", 00:22:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.419 "hdgst": ${hdgst:-false}, 00:22:07.419 "ddgst": ${ddgst:-false} 00:22:07.419 }, 00:22:07.419 "method": "bdev_nvme_attach_controller" 00:22:07.419 } 00:22:07.419 EOF 00:22:07.419 )") 00:22:07.419 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:07.419 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.419 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.419 { 00:22:07.419 "params": { 00:22:07.419 "name": "Nvme$subsystem", 00:22:07.419 "trtype": "$TEST_TRANSPORT", 00:22:07.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.419 "adrfam": "ipv4", 00:22:07.419 "trsvcid": "$NVMF_PORT", 00:22:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.420 "hdgst": ${hdgst:-false}, 00:22:07.420 "ddgst": ${ddgst:-false} 00:22:07.420 }, 00:22:07.420 "method": "bdev_nvme_attach_controller" 00:22:07.420 } 00:22:07.420 EOF 00:22:07.420 )") 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.420 { 00:22:07.420 "params": { 00:22:07.420 "name": "Nvme$subsystem", 00:22:07.420 "trtype": "$TEST_TRANSPORT", 00:22:07.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.420 "adrfam": "ipv4", 00:22:07.420 "trsvcid": "$NVMF_PORT", 00:22:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.420 "hdgst": ${hdgst:-false}, 00:22:07.420 "ddgst": ${ddgst:-false} 00:22:07.420 }, 00:22:07.420 "method": "bdev_nvme_attach_controller" 00:22:07.420 } 00:22:07.420 EOF 00:22:07.420 )") 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.420 { 00:22:07.420 "params": { 00:22:07.420 "name": "Nvme$subsystem", 00:22:07.420 "trtype": "$TEST_TRANSPORT", 00:22:07.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.420 "adrfam": "ipv4", 00:22:07.420 "trsvcid": "$NVMF_PORT", 00:22:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.420 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:07.420 "hdgst": ${hdgst:-false}, 00:22:07.420 "ddgst": ${ddgst:-false} 00:22:07.420 }, 00:22:07.420 "method": "bdev_nvme_attach_controller" 00:22:07.420 } 00:22:07.420 EOF 00:22:07.420 )") 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.420 { 00:22:07.420 "params": { 00:22:07.420 "name": "Nvme$subsystem", 00:22:07.420 "trtype": "$TEST_TRANSPORT", 00:22:07.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.420 "adrfam": "ipv4", 00:22:07.420 "trsvcid": "$NVMF_PORT", 00:22:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.420 "hdgst": ${hdgst:-false}, 00:22:07.420 "ddgst": ${ddgst:-false} 00:22:07.420 }, 00:22:07.420 "method": "bdev_nvme_attach_controller" 00:22:07.420 } 00:22:07.420 EOF 00:22:07.420 )") 00:22:07.420 [2024-07-12 19:14:09.764316] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:22:07.420 [2024-07-12 19:14:09.764367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369644 ] 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.420 { 00:22:07.420 "params": { 00:22:07.420 "name": "Nvme$subsystem", 00:22:07.420 "trtype": "$TEST_TRANSPORT", 00:22:07.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.420 "adrfam": "ipv4", 00:22:07.420 "trsvcid": "$NVMF_PORT", 00:22:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.420 "hdgst": ${hdgst:-false}, 00:22:07.420 "ddgst": ${ddgst:-false} 00:22:07.420 }, 00:22:07.420 "method": "bdev_nvme_attach_controller" 00:22:07.420 } 00:22:07.420 EOF 00:22:07.420 )") 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.420 { 00:22:07.420 "params": { 00:22:07.420 "name": "Nvme$subsystem", 00:22:07.420 "trtype": "$TEST_TRANSPORT", 00:22:07.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.420 "adrfam": "ipv4", 00:22:07.420 "trsvcid": "$NVMF_PORT", 00:22:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.420 "hdgst": ${hdgst:-false}, 00:22:07.420 "ddgst": ${ddgst:-false} 00:22:07.420 }, 00:22:07.420 "method": "bdev_nvme_attach_controller" 00:22:07.420 } 00:22:07.420 EOF 00:22:07.420 )") 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.420 19:14:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.420 { 00:22:07.420 "params": { 00:22:07.420 "name": "Nvme$subsystem", 00:22:07.420 "trtype": "$TEST_TRANSPORT", 00:22:07.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.420 "adrfam": "ipv4", 00:22:07.420 "trsvcid": "$NVMF_PORT", 00:22:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.420 "hdgst": ${hdgst:-false}, 00:22:07.420 "ddgst": ${ddgst:-false} 00:22:07.420 }, 00:22:07.420 "method": "bdev_nvme_attach_controller" 00:22:07.420 } 00:22:07.420 EOF 00:22:07.420 )") 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:07.420 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:07.420 19:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:07.420 "params": { 00:22:07.420 "name": "Nvme1", 00:22:07.420 "trtype": "tcp", 00:22:07.420 "traddr": "10.0.0.2", 00:22:07.420 "adrfam": "ipv4", 00:22:07.420 "trsvcid": "4420", 00:22:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:07.420 "hdgst": false, 00:22:07.420 "ddgst": false 00:22:07.420 }, 00:22:07.420 "method": "bdev_nvme_attach_controller" 00:22:07.420 },{ 00:22:07.420 "params": { 00:22:07.420 "name": "Nvme2", 00:22:07.420 "trtype": "tcp", 00:22:07.420 "traddr": "10.0.0.2", 00:22:07.420 "adrfam": "ipv4", 00:22:07.420 "trsvcid": "4420", 00:22:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:07.420 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:07.420 "hdgst": false, 00:22:07.420 "ddgst": false 00:22:07.420 }, 00:22:07.420 "method": "bdev_nvme_attach_controller" 00:22:07.420 },{ 00:22:07.420 "params": { 00:22:07.420 "name": "Nvme3", 00:22:07.420 "trtype": "tcp", 00:22:07.420 "traddr": "10.0.0.2", 00:22:07.420 "adrfam": "ipv4", 00:22:07.420 "trsvcid": "4420", 00:22:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:07.420 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:07.420 "hdgst": false, 00:22:07.420 "ddgst": false 00:22:07.420 }, 00:22:07.420 "method": "bdev_nvme_attach_controller" 00:22:07.420 },{ 00:22:07.420 "params": { 00:22:07.420 "name": "Nvme4", 00:22:07.420 "trtype": "tcp", 00:22:07.420 "traddr": "10.0.0.2", 00:22:07.420 "adrfam": "ipv4", 00:22:07.420 "trsvcid": "4420", 00:22:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:07.420 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:07.420 "hdgst": false, 00:22:07.420 "ddgst": false 00:22:07.420 }, 00:22:07.420 "method": "bdev_nvme_attach_controller" 00:22:07.420 },{ 00:22:07.420 "params": { 00:22:07.420 "name": "Nvme5", 00:22:07.420 "trtype": "tcp", 00:22:07.420 "traddr": "10.0.0.2", 00:22:07.420 "adrfam": "ipv4", 00:22:07.420 "trsvcid": "4420", 00:22:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:07.420 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:07.420 "hdgst": false, 00:22:07.420 "ddgst": false 00:22:07.420 }, 00:22:07.420 "method": "bdev_nvme_attach_controller" 00:22:07.420 },{ 00:22:07.420 "params": { 00:22:07.420 "name": "Nvme6", 00:22:07.420 "trtype": "tcp", 00:22:07.420 "traddr": "10.0.0.2", 00:22:07.420 "adrfam": "ipv4", 00:22:07.420 "trsvcid": "4420", 00:22:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:07.420 "hostnqn": "nqn.2016-06.io.spdk:host6", 
00:22:07.420 "hdgst": false, 00:22:07.420 "ddgst": false 00:22:07.420 }, 00:22:07.420 "method": "bdev_nvme_attach_controller" 00:22:07.420 },{ 00:22:07.420 "params": { 00:22:07.420 "name": "Nvme7", 00:22:07.420 "trtype": "tcp", 00:22:07.420 "traddr": "10.0.0.2", 00:22:07.420 "adrfam": "ipv4", 00:22:07.420 "trsvcid": "4420", 00:22:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:07.420 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:07.420 "hdgst": false, 00:22:07.420 "ddgst": false 00:22:07.420 }, 00:22:07.420 "method": "bdev_nvme_attach_controller" 00:22:07.420 },{ 00:22:07.420 "params": { 00:22:07.420 "name": "Nvme8", 00:22:07.420 "trtype": "tcp", 00:22:07.420 "traddr": "10.0.0.2", 00:22:07.420 "adrfam": "ipv4", 00:22:07.420 "trsvcid": "4420", 00:22:07.420 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:07.420 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:07.420 "hdgst": false, 00:22:07.420 "ddgst": false 00:22:07.420 }, 00:22:07.420 "method": "bdev_nvme_attach_controller" 00:22:07.420 },{ 00:22:07.421 "params": { 00:22:07.421 "name": "Nvme9", 00:22:07.421 "trtype": "tcp", 00:22:07.421 "traddr": "10.0.0.2", 00:22:07.421 "adrfam": "ipv4", 00:22:07.421 "trsvcid": "4420", 00:22:07.421 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:07.421 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:07.421 "hdgst": false, 00:22:07.421 "ddgst": false 00:22:07.421 }, 00:22:07.421 "method": "bdev_nvme_attach_controller" 00:22:07.421 },{ 00:22:07.421 "params": { 00:22:07.421 "name": "Nvme10", 00:22:07.421 "trtype": "tcp", 00:22:07.421 "traddr": "10.0.0.2", 00:22:07.421 "adrfam": "ipv4", 00:22:07.421 "trsvcid": "4420", 00:22:07.421 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:07.421 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:07.421 "hdgst": false, 00:22:07.421 "ddgst": false 00:22:07.421 }, 00:22:07.421 "method": "bdev_nvme_attach_controller" 00:22:07.421 }' 00:22:07.421 [2024-07-12 19:14:09.831708] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.421 [2024-07-12 19:14:09.904197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.328 Running I/O for 10 seconds... 
00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:09.328 19:14:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:09.587 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:09.587 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:09.587 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:09.587 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:09.587 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.587 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.587 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.587 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:22:09.588 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:09.588 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 369367 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 369367 ']' 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 369367 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 369367 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 369367' 00:22:09.849 killing process with pid 369367 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 369367 00:22:09.849 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 369367 00:22:09.849 [2024-07-12 19:14:12.404390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa430 is same with the state(5) to be set 00:22:09.849 [2024-07-12 19:14:12.404453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa430 is same with the state(5) to be set 00:22:09.849 [2024-07-12 19:14:12.404463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa430 is same with the state(5) to be set 00:22:09.849 [2024-07-12 19:14:12.404472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x17fa430 is same with the state(5) to be set
[... tcp.c:1607:nvmf_tcp_qpair_set_recv_state message repeated verbatim for tqpair=0x17fa430, 2024-07-12 19:14:12.404480 through 19:14:12.404944 ...]
00:22:09.849 [2024-07-12 19:14:12.406382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fce30 is same with the state(5) to be set
[... same message repeated verbatim for tqpair=0x17fce30, through 19:14:12.406801 ...]
00:22:09.850 [2024-07-12 19:14:12.409684] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:09.850 [2024-07-12 19:14:12.413439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fa8d0 is same with the state(5) to be set
[... same message repeated verbatim for tqpair=0x17fa8d0, through 19:14:12.413840 ...]
00:22:10.124 [2024-07-12 19:14:12.415374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fad90 is same with the state(5) to be set
[... same message repeated verbatim for tqpair=0x17fad90, through 19:14:12.415705 ...]
00:22:10.125 [2024-07-12 19:14:12.416868] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set
[... same message repeated verbatim for tqpair=0x17fb230, from 19:14:12.416895 onward ...]
with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.416960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.416966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.416973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.416979] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.416985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.416992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.416998] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.417004] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.417010] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.417016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.417022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.417028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.417034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.417041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.417048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.125 [2024-07-12 19:14:12.417054] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417089] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417095] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb230 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the 
state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.417995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418318] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 
19:14:12.418337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.126 [2024-07-12 19:14:12.418423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.418432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.418442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.418453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.418462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.418472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.418481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fb6f0 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same 
with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419523] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419572] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419621] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the 
state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419735] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.419783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fbb90 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420808] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420821] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420852] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420870] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.127 [2024-07-12 19:14:12.420880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420887] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420917] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 
19:14:12.420935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420979] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.420997] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421047] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same 
with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421071] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc030 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421953] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.421995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the 
state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422112] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.128 [2024-07-12 19:14:12.422175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.422275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17fc4d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.427263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.129 [2024-07-12 19:14:12.427292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.129 [2024-07-12 19:14:12.427302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.129 [2024-07-12 19:14:12.427310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.129 [2024-07-12 19:14:12.427318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.129 [2024-07-12 19:14:12.427325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.129 [2024-07-12 19:14:12.427332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.129 [2024-07-12 19:14:12.427339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.129 [2024-07-12 19:14:12.427346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19250d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.427377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
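
The run of identical lines above is one SPDK guard firing in a loop: nvmf_tcp_qpair_set_recv_state() in the target-side TCP transport (tcp.c:1607 per the log prefix) logs at *ERROR* and returns whenever it is asked to move a qpair's PDU receive state to the state it already holds, so each redundant call during qpair teardown emits one more copy of the message. Below is a minimal, self-contained sketch of that guard; the struct, enum, and constant names are stand-ins inferred from the message text, not the exact SPDK definitions.

#include <stdio.h>

/* Hypothetical stand-in for SPDK's PDU receive-state enum; only the guard's
 * behavior matters here.  This layout happens to put the error state at
 * value 5, which would match the "state(5)" printed above (an assumption,
 * not the real SPDK enum). */
enum pdu_recv_state {
	RECV_STATE_AWAIT_PDU_READY,	/* 0 */
	RECV_STATE_AWAIT_PDU_CH,	/* 1 */
	RECV_STATE_AWAIT_PDU_PSH,	/* 2 */
	RECV_STATE_AWAIT_PDU_PAYLOAD,	/* 3 */
	RECV_STATE_AWAIT_REQ,		/* 4 */
	RECV_STATE_ERROR,		/* 5 */
};

struct tcp_qpair {			/* stand-in for the transport's qpair */
	enum pdu_recv_state recv_state;
};

static void
qpair_set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
	/* Setting the state a qpair already holds is treated as a caller
	 * bug: log one error line and return without touching the qpair. */
	if (tqpair->recv_state == state) {
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
}

int
main(void)
{
	struct tcp_qpair q = { .recv_state = RECV_STATE_AWAIT_PDU_READY };

	qpair_set_recv_state(&q, RECV_STATE_ERROR);	/* real transition: silent */
	qpair_set_recv_state(&q, RECV_STATE_ERROR);	/* redundant: logs once */
	return 0;
}

The nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state lines further down are the host-side twin of the same guard, hit while the initiator's qpairs are torn down.
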
00:22:10.129 [2024-07-12 19:14:12.427263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.129 [2024-07-12 19:14:12.427292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.129 [2024-07-12 19:14:12.427302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.129 [2024-07-12 19:14:12.427310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.129 [2024-07-12 19:14:12.427318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.129 [2024-07-12 19:14:12.427325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.129 [2024-07-12 19:14:12.427332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.129 [2024-07-12 19:14:12.427339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.129 [2024-07-12 19:14:12.427346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19250d0 is same with the state(5) to be set
00:22:10.129 [2024-07-12 19:14:12.427377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
...
00:22:10.129 [2024-07-12 19:14:12.427432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177c190 is same with the state(5) to be set
00:22:10.129 [2024-07-12 19:14:12.427455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
...
00:22:10.129 [2024-07-12 19:14:12.427516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a8340 is same with the state(5) to be set
00:22:10.129 [2024-07-12 19:14:12.427539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
...
00:22:10.129 [2024-07-12 19:14:12.427593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179db30 is same with the state(5) to be set
00:22:10.129 [2024-07-12 19:14:12.427615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
...
00:22:10.129 [2024-07-12 19:14:12.427670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17961d0 is same with the state(5) to be set
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.129 [2024-07-12 19:14:12.427736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.129 [2024-07-12 19:14:12.427743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.129 [2024-07-12 19:14:12.427750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19258d0 is same with the state(5) to be set 00:22:10.129 [2024-07-12 19:14:12.427771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.427779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.427787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.427793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.427800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.427806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.427813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.427819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.427827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1759c70 is same with the state(5) to be set 00:22:10.130 [2024-07-12 19:14:12.427849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.427857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.427864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.427871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.427878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.427885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.427892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.427898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 
19:14:12.427904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190e8b0 is same with the state(5) to be set 00:22:10.130 [2024-07-12 19:14:12.427930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.427938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.427945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.427951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.427958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.427965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.427972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.427978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.427984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192e050 is same with the state(5) to be set 00:22:10.130 [2024-07-12 19:14:12.428005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.428013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.428021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.428028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.428035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.428041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.428048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.130 [2024-07-12 19:14:12.428055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.428060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a0bf0 is same with the state(5) to be set 00:22:10.130 [2024-07-12 19:14:12.431887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.431912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.431928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.431935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.431944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.431952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.431960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.431967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.431979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.431985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.431994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.130 [2024-07-12 19:14:12.432278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.130 [2024-07-12 19:14:12.432285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:10.131 [2024-07-12 19:14:12.432542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 
[2024-07-12 19:14:12.432687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 
19:14:12.432831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.131 [2024-07-12 19:14:12.432866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.131 [2024-07-12 19:14:12.432874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1882a60 is same with the state(5) to be set 00:22:10.131 [2024-07-12 19:14:12.432935] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1882a60 was disconnected and freed. reset controller. 00:22:10.131 [2024-07-12 19:14:12.434634] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:10.131 [2024-07-12 19:14:12.434660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:10.131 [2024-07-12 19:14:12.434677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19250d0 (9): Bad file descriptor 00:22:10.131 [2024-07-12 19:14:12.435466] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:10.131 [2024-07-12 19:14:12.435520] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:10.131 [2024-07-12 19:14:12.435557] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:10.131 [2024-07-12 19:14:12.435599] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:10.132 [2024-07-12 19:14:12.435639] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:10.132 [2024-07-12 19:14:12.435686] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:10.132 [2024-07-12 19:14:12.435729] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:10.132 [2024-07-12 19:14:12.435951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.132 [2024-07-12 19:14:12.435967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19250d0 with addr=10.0.0.2, port=4420 00:22:10.132 [2024-07-12 19:14:12.435975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19250d0 is same with the state(5) to be set 00:22:10.132 [2024-07-12 19:14:12.436018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:10.132 [2024-07-12 19:14:12.436352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 
19:14:12.436498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.132 [2024-07-12 19:14:12.436608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.132 [2024-07-12 19:14:12.436614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436643] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.133 [2024-07-12 19:14:12.436955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.133 [2024-07-12 19:14:12.436962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1753b70 is same with the state(5) to be set 00:22:10.133 [2024-07-12 19:14:12.437025] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1753b70 was disconnected and freed. reset controller. 00:22:10.133 [2024-07-12 19:14:12.437091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19250d0 (9): Bad file descriptor 00:22:10.133 [2024-07-12 19:14:12.438043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:10.133 [2024-07-12 19:14:12.438059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17961d0 (9): Bad file descriptor 00:22:10.133 [2024-07-12 19:14:12.438068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:10.133 [2024-07-12 19:14:12.438076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:10.133 [2024-07-12 19:14:12.438084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:10.133 [2024-07-12 19:14:12.438099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177c190 (9): Bad file descriptor 00:22:10.133 [2024-07-12 19:14:12.438115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a8340 (9): Bad file descriptor 00:22:10.133 [2024-07-12 19:14:12.438128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179db30 (9): Bad file descriptor 00:22:10.133 [2024-07-12 19:14:12.438142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19258d0 (9): Bad file descriptor 00:22:10.133 [2024-07-12 19:14:12.438155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1759c70 (9): Bad file descriptor 00:22:10.133 [2024-07-12 19:14:12.438168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190e8b0 (9): Bad file descriptor 00:22:10.133 [2024-07-12 19:14:12.438184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192e050 (9): Bad file descriptor 00:22:10.133 [2024-07-12 19:14:12.438197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a0bf0 (9): Bad file descriptor 00:22:10.133 [2024-07-12 19:14:12.438273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:10.133 [2024-07-12 19:14:12.438787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.133 [2024-07-12 19:14:12.438807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17961d0 with addr=10.0.0.2, port=4420
00:22:10.133 [2024-07-12 19:14:12.438815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17961d0 is same with the state(5) to be set
00:22:10.133 [2024-07-12 19:14:12.438857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17961d0 (9): Bad file descriptor
00:22:10.133 [2024-07-12 19:14:12.438898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:22:10.133 [2024-07-12 19:14:12.438906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:22:10.133 [2024-07-12 19:14:12.438912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:22:10.133 [2024-07-12 19:14:12.438949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:10.133 [2024-07-12 19:14:12.445521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:10.133 [2024-07-12 19:14:12.445786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.133 [2024-07-12 19:14:12.445800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19250d0 with addr=10.0.0.2, port=4420
00:22:10.134 [2024-07-12 19:14:12.445807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19250d0 is same with the state(5) to be set
00:22:10.134 [2024-07-12 19:14:12.445841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19250d0 (9): Bad file descriptor
00:22:10.134 [2024-07-12 19:14:12.445875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:22:10.134 [2024-07-12 19:14:12.445882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:22:10.134 [2024-07-12 19:14:12.445889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:22:10.134 [2024-07-12 19:14:12.445925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:10.134 [2024-07-12 19:14:12.448205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 
19:14:12.448377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448522] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.134 [2024-07-12 19:14:12.448804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.134 [2024-07-12 19:14:12.448813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.448820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.448828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.448835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.448843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.448849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.448857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.448865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.448872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.448879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.448886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.448893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.448901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.448907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.448915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.448922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.448930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.448936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.448943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.448950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.448958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.448964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.448972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.448978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.448986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.448992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.449000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.449006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.449014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.449020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.449028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.449035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.449044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.449050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.449058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.449065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.449073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.449079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.449087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.449093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.449101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.449107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.449114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.449121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.449129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.449135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.449143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.449150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.449157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189aea0 is same with the state(5) to be set 00:22:10.135 [2024-07-12 19:14:12.450170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450279] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.135 [2024-07-12 19:14:12.450454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.135 [2024-07-12 19:14:12.450462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.450985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.450991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.451000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.451007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.451015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.451022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.451030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.451037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.451045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.451051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.451059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.451065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.136 [2024-07-12 19:14:12.451073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.136 [2024-07-12 19:14:12.451080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.137 [2024-07-12 19:14:12.451088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.137 [2024-07-12 19:14:12.451094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.137 [2024-07-12 19:14:12.451102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.137 [2024-07-12 19:14:12.451108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.137 [2024-07-12 19:14:12.451117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.137 [2024-07-12 19:14:12.451123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.137 [2024-07-12 19:14:12.451130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1821490 is same with the state(5) to be set 00:22:10.137 [2024-07-12 19:14:12.452128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.137 [2024-07-12 19:14:12.452144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.137 [2024-07-12 19:14:12.452155] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.137 [2024-07-12 19:14:12.452162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.137 [2024-07-12 19:14:12.452170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.137 [2024-07-12 19:14:12.452180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.137 [2024-07-12 19:14:12.452188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.137 [2024-07-12 19:14:12.452195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.137 [2024-07-12 19:14:12.452203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.137 [2024-07-12 19:14:12.452209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.137 [2024-07-12 19:14:12.452218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.137 [2024-07-12 19:14:12.452227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.137 [2024-07-12 19:14:12.452236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.137 [2024-07-12 19:14:12.452243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.137 [2024-07-12 19:14:12.452252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.137 [2024-07-12 19:14:12.452258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.137 [2024-07-12 19:14:12.452267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.137 [2024-07-12 19:14:12.452273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.137 [2024-07-12 19:14:12.452281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.137 [2024-07-12 19:14:12.452288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.137 [2024-07-12 19:14:12.452296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.137 [2024-07-12 19:14:12.452303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.137 [2024-07-12 19:14:12.452311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.137 [2024-07-12 19:14:12.452317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:34 through cid:63 (lba:28928 through lba:32640, len:128 each) ...]
00:22:10.138 [2024-07-12 19:14:12.452758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1822920 is same with the state(5) to be set
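Every completion record above carries the same status pair, printed as (00/08). In SPDK's completion print format that pair reads as (SCT/SC): Status Code Type 0x0 is Generic Command Status and, within that type, Status Code 0x08 is Command Aborted due to SQ Deletion, which is what I/O still outstanding at submission-queue teardown completes with. Below is a minimal, self-contained decode sketch in plain C; it is not from the test tree, and the 0x0010 input is a hypothetical raw Status Field halfword built from the NVMe spec's completion-queue-entry layout.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical raw Status Field (CQE DW3 bits 31:16): SCT=0x0, SC=0x08,
         * More/DNR/phase clear -- matching the "(00/08) ... p:0 m:0 dnr:0"
         * records in the log above. */
        uint16_t status = 0x08 << 1;

        uint8_t p   = status & 0x1;          /* bit 0     - Phase Tag        */
        uint8_t sc  = (status >> 1) & 0xff;  /* bits 8:1  - Status Code      */
        uint8_t sct = (status >> 9) & 0x7;   /* bits 11:9 - Status Code Type */
        uint8_t m   = (status >> 14) & 0x1;  /* bit 14    - More             */
        uint8_t dnr = (status >> 15) & 0x1;  /* bit 15    - Do Not Retry     */

        /* Prints "(00/08) p:0 m:0 dnr:0" -- ABORTED - SQ DELETION. */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        return 0;
    }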
00:22:10.138 [2024-07-12 19:14:12.453681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... an identical dump of all 64 outstanding READs (cid:0 through cid:63, lba:24576 through lba:32640), each completed ABORTED - SQ DELETION (00/08), repeats for the next qpair ...]
00:22:10.139 [2024-07-12 19:14:12.454650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755040 is same with the state(5) to be set
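The dumps also show the shape of the aborted workload: command IDs run 0 through 63, so all 64 slots of the I/O queue were in flight, and each LBA advances by exactly the transfer length, so the reads tile one contiguous sequential burst. A quick arithmetic check of that claim, with the constants copied from the log records (plain C, illustration only):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint32_t first_lba = 24576;  /* lba of cid:0 in each dump    */
        const uint32_t len       = 128;    /* blocks per READ (len:128)    */
        const uint32_t ncmds     = 64;     /* cid:0 through cid:63         */

        /* Stride equals transfer length, so commands cover a contiguous range. */
        uint32_t last_lba = first_lba + (ncmds - 1) * len;  /* 32640, as logged */
        uint32_t end_lba  = first_lba + ncmds * len - 1;    /* 32767            */

        printf("last start lba: %u (log shows lba:32640 for cid:63)\n", last_lba);
        printf("contiguous range: lba %u..%u, %u blocks total\n",
               first_lba, end_lba, ncmds * len);
        return 0;
    }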
00:22:10.139 [2024-07-12 19:14:12.455649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... the same 64-command READ / ABORTED - SQ DELETION (00/08) dump repeats for the next qpair ...]
00:22:10.141 [2024-07-12 19:14:12.456589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189b910 is same with the state(5) to be set
00:22:10.141 [2024-07-12 19:14:12.457607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... a fourth identical dump follows, shown in this excerpt through cid:50 ...]
00:22:10.142 [2024-07-12 19:14:12.458344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.142 [2024-07-12 19:14:12.458351] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.458359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.458365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.458373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.458379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.458387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.458393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.458401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.458407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.458415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.458421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.458429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.458437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.458445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.458451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.458459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.458465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.458473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.458479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.458487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.458493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.458501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.458507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.458515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.458522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.458529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.458535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.458543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189cde0 is same with the state(5) to be set 00:22:10.142 [2024-07-12 19:14:12.459538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.459549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.459560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.459566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.142 [2024-07-12 19:14:12.459575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.142 [2024-07-12 19:14:12.459581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.459991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.459999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.460005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.460014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.460020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.460028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.460035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.460043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.460050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.460058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.460064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.460073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.460079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.460087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.460094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.460102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.460108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.460116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.460123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.460131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.460137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.460145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.460152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.460160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.460167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.460176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.460184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.460192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.143 [2024-07-12 19:14:12.460198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.143 [2024-07-12 19:14:12.460207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.460213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.460221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.460230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.460239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:10.144 [2024-07-12 19:14:12.460245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.460253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.460260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.460268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.460275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.460283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.460289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.460297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.460303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.460311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.460318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.460326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.460332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.460340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.460347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.460355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.460363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.460373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.460379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.460387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 
19:14:12.460393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.460401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.460408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.460416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.460422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.460430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.464865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.464879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.464887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.464895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.464902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.464909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.464916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.464924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.464930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.464937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189e2b0 is same with the state(5) to be set 00:22:10.144 [2024-07-12 19:14:12.466257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.144 [2024-07-12 19:14:12.466598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.144 [2024-07-12 19:14:12.466604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.466989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.466996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.467002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.467011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.467017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.467025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.467033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.467041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.467047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.467055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.467061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.467069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.467075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.467083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.467089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.467097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.467103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.467111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.467117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.467125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.467131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.467139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.467146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.467153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.467160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.467168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.467174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.467182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.145 [2024-07-12 19:14:12.467188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.145 [2024-07-12 19:14:12.467195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1883ef0 is same with the state(5) to be set 00:22:10.145 [2024-07-12 19:14:12.468494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:10.145 [2024-07-12 19:14:12.468515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:10.146 [2024-07-12 19:14:12.468527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:10.146 [2024-07-12 19:14:12.468536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:10.146 [2024-07-12 19:14:12.468600] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:10.146 [2024-07-12 19:14:12.468612] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:10.146 [2024-07-12 19:14:12.468625] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:10.146 [2024-07-12 19:14:12.468634] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
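The ABORTED - SQ DELETION (00/08) notices above are the expected signature of a controller reset: every command still in flight on a qpair is failed back with sct=0x00 (generic) / sc=0x08 when its submission queue is deleted. A minimal sketch of how a completion callback can classify that status using SPDK's public helpers; io_complete and requeue_io are illustrative names, not part of this test, and the SPDK development headers are assumed to be installed:

    #include <stdio.h>

    #include "spdk/nvme.h"

    /* Hypothetical requeue hook: stands in for whatever the application
     * does with I/O that must be resubmitted after the reset finishes. */
    static void
    requeue_io(void *io)
    {
            (void)io;
    }

    /* Completion callback sketch (matches the spdk_nvme_cmd_cb signature).
     * sct=0x00 + sc=0x08 is exactly the "ABORTED - SQ DELETION (00/08)"
     * status printed in the notices above. */
    static void
    io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            if (!spdk_nvme_cpl_is_error(cpl)) {
                    return; /* I/O completed normally */
            }

            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* Transient: the queue went away under a reset; the
                     * command itself was not rejected by the device, so it
                     * is safe to retry once the controller reconnects. */
                    requeue_io(cb_arg);
                    return;
            }

            fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
                    (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
    }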
00:22:10.146 [2024-07-12 19:14:12.468712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:10.146 [2024-07-12 19:14:12.468722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:10.146 [2024-07-12 19:14:12.468729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:10.146 task offset: 24576 on job bdev=Nvme9n1 fails
00:22:10.146
00:22:10.146 Latency(us)
00:22:10.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:10.146 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.146 Job: Nvme1n1 ended in about 0.90 seconds with error
00:22:10.146 Verification LBA range: start 0x0 length 0x400
00:22:10.146 Nvme1n1 : 0.90 212.68 13.29 70.89 0.00 223462.18 19261.89 230686.72
00:22:10.146 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.146 Job: Nvme2n1 ended in about 0.90 seconds with error
00:22:10.146 Verification LBA range: start 0x0 length 0x400
00:22:10.146 Nvme2n1 : 0.90 216.64 13.54 70.74 0.00 216584.64 16982.37 215186.03
00:22:10.146 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.146 Job: Nvme3n1 ended in about 0.91 seconds with error
00:22:10.146 Verification LBA range: start 0x0 length 0x400
00:22:10.146 Nvme3n1 : 0.91 236.13 14.76 46.34 0.00 214906.66 24846.69 209715.20
00:22:10.146 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.146 Job: Nvme4n1 ended in about 0.89 seconds with error
00:22:10.146 Verification LBA range: start 0x0 length 0x400
00:22:10.146 Nvme4n1 : 0.89 239.12 14.95 71.85 0.00 192800.77 5698.78 216097.84
00:22:10.146 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.146 Job: Nvme5n1 ended in about 0.91 seconds with error
00:22:10.146 Verification LBA range: start 0x0 length 0x400
00:22:10.146 Nvme5n1 : 0.91 211.40 13.21 70.47 0.00 209071.42 16640.45 219745.06
00:22:10.146 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.146 Job: Nvme6n1 ended in about 0.91 seconds with error
00:22:10.146 Verification LBA range: start 0x0 length 0x400
00:22:10.146 Nvme6n1 : 0.91 210.95 13.18 70.32 0.00 205566.44 15956.59 203332.56
00:22:10.146 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.146 Job: Nvme7n1 ended in about 0.91 seconds with error
00:22:10.146 Verification LBA range: start 0x0 length 0x400
00:22:10.146 Nvme7n1 : 0.91 210.50 13.16 70.17 0.00 202119.35 17096.35 203332.56
00:22:10.146 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.146 Job: Nvme8n1 ended in about 0.92 seconds with error
00:22:10.146 Verification LBA range: start 0x0 length 0x400
00:22:10.146 Nvme8n1 : 0.92 214.48 13.40 69.68 0.00 196008.65 7636.37 218833.25
00:22:10.146 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.146 Job: Nvme9n1 ended in about 0.89 seconds with error
00:22:10.146 Verification LBA range: start 0x0 length 0x400
00:22:10.146 Nvme9n1 : 0.89 216.47 13.53 72.16 0.00 188094.33 6468.12 217009.64
00:22:10.146 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.146 Job: Nvme10n1 ended in about 0.92 seconds with error
00:22:10.146 Verification LBA range: start 0x0 length 0x400
00:22:10.146 Nvme10n1 : 0.92 145.53 9.10 69.51 0.00 249196.83 18008.15 244363.80
00:22:10.146 ===================================================================================================================
00:22:10.146 Total : 2113.89 132.12 682.11 0.00 208720.96 5698.78 244363.80
00:22:10.146 [2024-07-12 19:14:12.491327] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:10.146 [2024-07-12 19:14:12.491365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:10.146 [2024-07-12 19:14:12.491672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.146 [2024-07-12 19:14:12.491688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1759c70 with addr=10.0.0.2, port=4420
00:22:10.146 [2024-07-12 19:14:12.491698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1759c70 is same with the state(5) to be set
00:22:10.146 [2024-07-12 19:14:12.491840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.146 [2024-07-12 19:14:12.491850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19258d0 with addr=10.0.0.2, port=4420
00:22:10.146 [2024-07-12 19:14:12.491857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19258d0 is same with the state(5) to be set
00:22:10.146 [2024-07-12 19:14:12.491980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.146 [2024-07-12 19:14:12.491990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x192e050 with addr=10.0.0.2, port=4420
00:22:10.146 [2024-07-12 19:14:12.491997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192e050 is same with the state(5) to be set
00:22:10.146 [2024-07-12 19:14:12.492120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.146 [2024-07-12 19:14:12.492129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a8340 with addr=10.0.0.2, port=4420
00:22:10.146 [2024-07-12 19:14:12.492136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a8340 is same with the state(5) to be set
00:22:10.146 [2024-07-12 19:14:12.494191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.146 [2024-07-12 19:14:12.494212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x179db30 with addr=10.0.0.2, port=4420
00:22:10.146 [2024-07-12 19:14:12.494220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179db30 is same with the state(5) to be set
00:22:10.146 [2024-07-12 19:14:12.494448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.146 [2024-07-12 19:14:12.494458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x177c190 with addr=10.0.0.2, port=4420
00:22:10.146 [2024-07-12 19:14:12.494465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177c190 is same with the state(5) to be set
00:22:10.146 [2024-07-12 19:14:12.494603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.146 [2024-07-12 19:14:12.494613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a0bf0 with addr=10.0.0.2, port=4420
00:22:10.146 [2024-07-12 19:14:12.494619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x17a0bf0 is same with the state(5) to be set 00:22:10.146 [2024-07-12 19:14:12.494758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.146 [2024-07-12 19:14:12.494769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190e8b0 with addr=10.0.0.2, port=4420 00:22:10.146 [2024-07-12 19:14:12.494775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190e8b0 is same with the state(5) to be set 00:22:10.146 [2024-07-12 19:14:12.494794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1759c70 (9): Bad file descriptor 00:22:10.146 [2024-07-12 19:14:12.494806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19258d0 (9): Bad file descriptor 00:22:10.146 [2024-07-12 19:14:12.494814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192e050 (9): Bad file descriptor 00:22:10.146 [2024-07-12 19:14:12.494822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a8340 (9): Bad file descriptor 00:22:10.146 [2024-07-12 19:14:12.494851] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:10.146 [2024-07-12 19:14:12.494863] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:10.146 [2024-07-12 19:14:12.494876] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:10.146 [2024-07-12 19:14:12.494885] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:10.146 [2024-07-12 19:14:12.494894] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:10.146 [2024-07-12 19:14:12.494903] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:10.146 [2024-07-12 19:14:12.494972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:10.146 [2024-07-12 19:14:12.494982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:10.146 [2024-07-12 19:14:12.495015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179db30 (9): Bad file descriptor 00:22:10.146 [2024-07-12 19:14:12.495025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177c190 (9): Bad file descriptor 00:22:10.146 [2024-07-12 19:14:12.495033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a0bf0 (9): Bad file descriptor 00:22:10.146 [2024-07-12 19:14:12.495041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190e8b0 (9): Bad file descriptor 00:22:10.146 [2024-07-12 19:14:12.495049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:10.146 [2024-07-12 19:14:12.495055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:10.146 [2024-07-12 19:14:12.495062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
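One quick consistency check on the Latency table above: with the fixed 65536-byte IO size, the MiB/s column is just the IOPS column divided by 16, so any row can be verified by hand:

    \[
    \text{MiB/s} = \frac{\text{IOPS} \times 65536}{2^{20}} = \frac{\text{IOPS}}{16},
    \qquad \text{e.g. Nvme1n1: } \frac{212.68}{16} \approx 13.29,
    \quad \text{Nvme10n1: } \frac{145.53}{16} \approx 9.10
    \]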
00:22:10.147 [2024-07-12 19:14:12.495073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:10.147 [2024-07-12 19:14:12.495079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:10.147 [2024-07-12 19:14:12.495085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:10.147 [2024-07-12 19:14:12.495093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:10.147 [2024-07-12 19:14:12.495099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:10.147 [2024-07-12 19:14:12.495105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:10.147 [2024-07-12 19:14:12.495114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:10.147 [2024-07-12 19:14:12.495119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:10.147 [2024-07-12 19:14:12.495125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:10.147 [2024-07-12 19:14:12.495199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:10.147 [2024-07-12 19:14:12.495207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:10.147 [2024-07-12 19:14:12.495216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:10.147 [2024-07-12 19:14:12.495221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:10.147 [2024-07-12 19:14:12.495472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.147 [2024-07-12 19:14:12.495483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17961d0 with addr=10.0.0.2, port=4420 00:22:10.147 [2024-07-12 19:14:12.495490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17961d0 is same with the state(5) to be set 00:22:10.147 [2024-07-12 19:14:12.495709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.147 [2024-07-12 19:14:12.495720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19250d0 with addr=10.0.0.2, port=4420 00:22:10.147 [2024-07-12 19:14:12.495727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19250d0 is same with the state(5) to be set 00:22:10.147 [2024-07-12 19:14:12.495733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:10.147 [2024-07-12 19:14:12.495739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:10.147 [2024-07-12 19:14:12.495745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
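All of the posix_sock_create failures in this stretch are errno = 111, ECONNREFUSED: the target side was shut down first, so every reconnect to 10.0.0.2:4420 is refused and the controllers land in the failed state by design. When triaging a run like this by hand, a throwaway probe of the listener (illustrative, not part of the suite) distinguishes a refused port from a hung one:

    # exits 0 if something accepts on 10.0.0.2:4420; non-zero on ECONNREFUSED or timeout
    timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo listening || echo refused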
00:22:10.147 [2024-07-12 19:14:12.495753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:10.147 [2024-07-12 19:14:12.495759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:10.147 [2024-07-12 19:14:12.495764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:10.147 [2024-07-12 19:14:12.495772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:10.147 [2024-07-12 19:14:12.495778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:10.147 [2024-07-12 19:14:12.495784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:10.147 [2024-07-12 19:14:12.495792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:10.147 [2024-07-12 19:14:12.495798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:10.147 [2024-07-12 19:14:12.495803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:10.147 [2024-07-12 19:14:12.495829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:10.147 [2024-07-12 19:14:12.495837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:10.147 [2024-07-12 19:14:12.495842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:10.147 [2024-07-12 19:14:12.495847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:10.147 [2024-07-12 19:14:12.495855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17961d0 (9): Bad file descriptor 00:22:10.147 [2024-07-12 19:14:12.495863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19250d0 (9): Bad file descriptor 00:22:10.147 [2024-07-12 19:14:12.495888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:10.147 [2024-07-12 19:14:12.495895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:10.147 [2024-07-12 19:14:12.495900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:10.147 [2024-07-12 19:14:12.495908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:10.147 [2024-07-12 19:14:12.495914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:10.147 [2024-07-12 19:14:12.495922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:10.147 [2024-07-12 19:14:12.495947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:10.147 [2024-07-12 19:14:12.495954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
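That is the last of the tc3 failure output; everything from here on is teardown. The kill -9 at shutdown.sh@142 just below reports No such process because the target already exited, and the script tolerates that on purpose, a pattern worth keeping in mind for any set -e test script (pid value illustrative):

    nvmfpid=369644               # recorded when the target was launched
    kill -9 "$nvmfpid" || true   # an already-dead pid must not abort the run under set -e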
00:22:10.406 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:10.406 19:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:11.342 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 369644 00:22:11.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (369644) - No such process 00:22:11.342 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:22:11.342 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:11.342 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:11.342 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:11.342 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:11.342 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:11.342 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:11.342 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:11.342 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:11.342 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:11.342 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:11.342 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:11.342 rmmod nvme_tcp 00:22:11.342 rmmod nvme_fabrics 00:22:11.342 rmmod nvme_keyring 00:22:11.601 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:11.601 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:11.601 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:11.601 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:11.601 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:11.601 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:11.601 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:11.601 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.601 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:11.601 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.601 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.601 19:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.509 19:14:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:13.509 00:22:13.509 real 0m8.004s 00:22:13.509 user 0m20.212s 00:22:13.509 sys 0m1.256s 00:22:13.509 
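The rmmod lines above come from nvmfcleanup in nvmf/common.sh: nvme-tcp is unloaded in a retry loop (the {1..20} visible in the trace) because module references can linger briefly after the test exits, and nvme-fabrics can only be removed once nvme-tcp is gone. A minimal sketch of that shape, assuming the upstream function differs in detail:

    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # retry until lingering references drain
        sleep 1
    done
    modprobe -v -r nvme-fabrics            # unloads only after nvme-tcp is removed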
19:14:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:13.509 19:14:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:13.509 ************************************ 00:22:13.509 END TEST nvmf_shutdown_tc3 00:22:13.509 ************************************ 00:22:13.509 19:14:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:13.509 19:14:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:13.509 00:22:13.509 real 0m31.565s 00:22:13.509 user 1m18.768s 00:22:13.509 sys 0m8.463s 00:22:13.509 19:14:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:13.509 19:14:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:13.509 ************************************ 00:22:13.509 END TEST nvmf_shutdown 00:22:13.509 ************************************ 00:22:13.509 19:14:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:13.509 19:14:16 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:22:13.509 19:14:16 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:13.509 19:14:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.769 19:14:16 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:22:13.769 19:14:16 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:13.769 19:14:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.769 19:14:16 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:22:13.769 19:14:16 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:13.769 19:14:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:13.769 19:14:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.769 19:14:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.769 ************************************ 00:22:13.769 START TEST nvmf_multicontroller 00:22:13.769 ************************************ 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:13.769 * Looking for test storage... 
00:22:13.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.769 19:14:16 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:13.770 19:14:16 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:13.770 19:14:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:20.340 19:14:21 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:20.340 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:20.340 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:20.340 Found net devices under 0000:86:00.0: cvl_0_0 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:20.340 Found net devices under 0000:86:00.1: cvl_0_1 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:20.340 19:14:21 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:20.340 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:20.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:20.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:22:20.340 00:22:20.340 --- 10.0.0.2 ping statistics --- 00:22:20.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.341 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:22:20.341 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:20.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:20.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:22:20.341 00:22:20.341 --- 10.0.0.1 ping statistics --- 00:22:20.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.341 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:22:20.341 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:20.341 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:22:20.341 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:20.341 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:20.341 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:20.341 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:20.341 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:20.341 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:20.341 19:14:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=373808 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 373808 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 373808 ']' 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.341 [2024-07-12 19:14:22.064967] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:22:20.341 [2024-07-12 19:14:22.065014] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.341 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.341 [2024-07-12 19:14:22.136893] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:20.341 [2024-07-12 19:14:22.217223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.341 [2024-07-12 19:14:22.217262] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.341 [2024-07-12 19:14:22.217269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.341 [2024-07-12 19:14:22.217274] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.341 [2024-07-12 19:14:22.217279] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
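nvmfappstart above boils down to launching nvmf_tgt inside the test namespace and polling its RPC socket before any rpc_cmd is issued; stripped of bookkeeping it is roughly the following (paths and core mask copied from the trace; the polling loop is a simplification of waitforlisten):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # block until the target answers on the default RPC socket /var/tmp/spdk.sock
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done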
00:22:20.341 [2024-07-12 19:14:22.217352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.341 [2024-07-12 19:14:22.217460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:20.341 [2024-07-12 19:14:22.217459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:20.341 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.600 [2024-07-12 19:14:22.922020] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.600 Malloc0 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.600 [2024-07-12 19:14:22.983109] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.600 
19:14:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.600 [2024-07-12 19:14:22.991048] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.600 19:14:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.600 Malloc1 00:22:20.600 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.600 19:14:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:20.600 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.600 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.600 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.600 19:14:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:20.600 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.600 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.600 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.600 19:14:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:20.600 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.600 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.600 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.601 19:14:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:20.601 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.601 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.601 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.601 19:14:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=373947 00:22:20.601 19:14:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:20.601 19:14:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:20.601 19:14:23 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 373947 /var/tmp/bdevperf.sock 00:22:20.601 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 373947 ']' 00:22:20.601 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:20.601 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:20.601 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:20.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:20.601 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:20.601 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.536 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:21.536 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:22:21.536 19:14:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:21.536 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.536 19:14:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.536 NVMe0n1 00:22:21.536 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.536 19:14:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:21.536 19:14:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:21.536 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.536 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.536 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.536 1 00:22:21.536 19:14:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:21.536 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:21.536 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:21.536 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.537 request: 00:22:21.537 { 00:22:21.537 "name": "NVMe0", 00:22:21.537 "trtype": "tcp", 00:22:21.537 "traddr": "10.0.0.2", 00:22:21.537 "adrfam": "ipv4", 00:22:21.537 "trsvcid": "4420", 00:22:21.537 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.537 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:21.537 "hostaddr": "10.0.0.2", 00:22:21.537 "hostsvcid": "60000", 00:22:21.537 "prchk_reftag": false, 00:22:21.537 "prchk_guard": false, 00:22:21.537 "hdgst": false, 00:22:21.537 "ddgst": false, 00:22:21.537 "method": "bdev_nvme_attach_controller", 00:22:21.537 "req_id": 1 00:22:21.537 } 00:22:21.537 Got JSON-RPC error response 00:22:21.537 response: 00:22:21.537 { 00:22:21.537 "code": -114, 00:22:21.537 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:21.537 } 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.537 request: 00:22:21.537 { 00:22:21.537 "name": "NVMe0", 00:22:21.537 "trtype": "tcp", 00:22:21.537 "traddr": "10.0.0.2", 00:22:21.537 "adrfam": "ipv4", 00:22:21.537 "trsvcid": "4420", 00:22:21.537 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:21.537 "hostaddr": "10.0.0.2", 00:22:21.537 "hostsvcid": "60000", 00:22:21.537 "prchk_reftag": false, 00:22:21.537 "prchk_guard": false, 00:22:21.537 
"hdgst": false, 00:22:21.537 "ddgst": false, 00:22:21.537 "method": "bdev_nvme_attach_controller", 00:22:21.537 "req_id": 1 00:22:21.537 } 00:22:21.537 Got JSON-RPC error response 00:22:21.537 response: 00:22:21.537 { 00:22:21.537 "code": -114, 00:22:21.537 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:21.537 } 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.537 request: 00:22:21.537 { 00:22:21.537 "name": "NVMe0", 00:22:21.537 "trtype": "tcp", 00:22:21.537 "traddr": "10.0.0.2", 00:22:21.537 "adrfam": "ipv4", 00:22:21.537 "trsvcid": "4420", 00:22:21.537 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.537 "hostaddr": "10.0.0.2", 00:22:21.537 "hostsvcid": "60000", 00:22:21.537 "prchk_reftag": false, 00:22:21.537 "prchk_guard": false, 00:22:21.537 "hdgst": false, 00:22:21.537 "ddgst": false, 00:22:21.537 "multipath": "disable", 00:22:21.537 "method": "bdev_nvme_attach_controller", 00:22:21.537 "req_id": 1 00:22:21.537 } 00:22:21.537 Got JSON-RPC error response 00:22:21.537 response: 00:22:21.537 { 00:22:21.537 "code": -114, 00:22:21.537 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:21.537 } 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:21.537 19:14:24 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.537 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.797 request: 00:22:21.797 { 00:22:21.797 "name": "NVMe0", 00:22:21.797 "trtype": "tcp", 00:22:21.797 "traddr": "10.0.0.2", 00:22:21.797 "adrfam": "ipv4", 00:22:21.797 "trsvcid": "4420", 00:22:21.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.797 "hostaddr": "10.0.0.2", 00:22:21.797 "hostsvcid": "60000", 00:22:21.797 "prchk_reftag": false, 00:22:21.797 "prchk_guard": false, 00:22:21.797 "hdgst": false, 00:22:21.797 "ddgst": false, 00:22:21.797 "multipath": "failover", 00:22:21.797 "method": "bdev_nvme_attach_controller", 00:22:21.797 "req_id": 1 00:22:21.797 } 00:22:21.797 Got JSON-RPC error response 00:22:21.797 response: 00:22:21.797 { 00:22:21.797 "code": -114, 00:22:21.797 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:21.797 } 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.797 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.797 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.056 00:22:22.056 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.056 19:14:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:22.056 19:14:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:22.056 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.056 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.056 19:14:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.056 19:14:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:22.056 19:14:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:23.435 0 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 373947 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 373947 ']' 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 373947 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 373947 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 373947' 00:22:23.435 killing process with pid 373947 00:22:23.435 19:14:25 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 373947 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 373947 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:22:23.435 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:23.435 [2024-07-12 19:14:23.089456] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:22:23.435 [2024-07-12 19:14:23.089505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373947 ] 00:22:23.435 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.435 [2024-07-12 19:14:23.156631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.435 [2024-07-12 19:14:23.237229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.435 [2024-07-12 19:14:24.551806] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 4a7f2e6c-b18b-45bd-88ec-a96bfe6af274 already exists 00:22:23.435 [2024-07-12 19:14:24.551836] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:4a7f2e6c-b18b-45bd-88ec-a96bfe6af274 alias for bdev NVMe1n1 00:22:23.435 [2024-07-12 19:14:24.551844] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:23.435 Running I/O for 1 seconds... 
00:22:23.435
00:22:23.435 Latency(us)
00:22:23.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:23.435 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:22:23.435 NVMe0n1 : 1.00 24688.86 96.44 0.00 0.00 5177.55 2706.92 9061.06
00:22:23.435 ===================================================================================================================
00:22:23.435 Total : 24688.86 96.44 0.00 0.00 5177.55 2706.92 9061.06
00:22:23.435 Received shutdown signal, test time was about 1.000000 seconds
00:22:23.435
00:22:23.435 Latency(us)
00:22:23.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:23.435 ===================================================================================================================
00:22:23.435 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:23.435 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.435 19:14:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.435 rmmod nvme_tcp 00:22:23.435 rmmod nvme_fabrics 00:22:23.695 rmmod nvme_keyring 00:22:23.695 19:14:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.695 19:14:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:23.695 19:14:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:23.695 19:14:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 373808 ']' 00:22:23.695 19:14:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 373808 00:22:23.695 19:14:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 373808 ']' 00:22:23.695 19:14:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 373808 00:22:23.695 19:14:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:22:23.695 19:14:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.695 19:14:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 373808 00:22:23.695 19:14:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:23.695 19:14:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:23.695 19:14:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 373808' 00:22:23.695 killing process with pid 373808 00:22:23.695 19:14:26 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 373808 00:22:23.955 19:14:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.955 19:14:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.955 19:14:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.955 19:14:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.955 19:14:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.955 19:14:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.955 19:14:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.955 19:14:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.861 19:14:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:25.861 00:22:25.861 real 0m12.233s 00:22:25.861 user 0m17.051s 00:22:25.861 sys 0m5.077s 00:22:25.861 19:14:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:25.861 19:14:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:25.861 ************************************ 00:22:25.861 END TEST nvmf_multicontroller 00:22:25.861 ************************************ 00:22:25.861 19:14:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:25.862 19:14:28 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:25.862 19:14:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:25.862 19:14:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:25.862 19:14:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:26.122 ************************************ 00:22:26.122 START TEST nvmf_aer 00:22:26.122 ************************************ 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:26.122 * Looking for test storage... 
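The multicontroller test that just ended exercises the duplicate-attach guard in bdev_nvme: once a controller named NVMe0 exists, re-attaching that name with a different host NQN, to a different subsystem NQN, or with multipath disabled is rejected with JSON-RPC error -114, while a second path on port 4421 and a separately named NVMe1 controller both attach cleanly. A minimal reproduction of the failure path, sketched with the flags taken from the trace above (the scripts/rpc.py location and a target already listening on 10.0.0.2:4420 are assumptions, not part of the log):

  # First attach succeeds and registers bdev NVMe0n1.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # Reusing the name NVMe0 for a different subsystem NQN returns -114
  # ("A controller named NVMe0 already exists with the specified
  # network path"); the harness wraps this in NOT to assert the
  # non-zero exit status.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 \
    || echo 'duplicate attach rejected as expected'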
00:22:26.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.122 19:14:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:32.692 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:22:32.692 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:32.692 Found net devices under 0000:86:00.0: cvl_0_0 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:32.692 Found net devices under 0000:86:00.1: cvl_0_1 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.692 
19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.692 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:32.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:22:32.693 00:22:32.693 --- 10.0.0.2 ping statistics --- 00:22:32.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.693 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:22:32.693 00:22:32.693 --- 10.0.0.1 ping statistics --- 00:22:32.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.693 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=377932 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 377932 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 377932 ']' 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:32.693 19:14:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:32.693 [2024-07-12 19:14:34.358001] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:22:32.693 [2024-07-12 19:14:34.358048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.693 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.693 [2024-07-12 19:14:34.430622] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:32.693 [2024-07-12 19:14:34.511489] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.693 [2024-07-12 19:14:34.511523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
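At this point nvmftestinit has split the e810 port pair across a network namespace: cvl_0_0 was moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 (the target side), cvl_0_1 stays on the host as 10.0.0.1 (the initiator side), and the pings above confirm both directions. The nvmf_tgt that just started inside that namespace (pid 377932) is then configured over /var/tmp/spdk.sock; the rpc_cmd calls traced below condense to the following sketch (the scripts/rpc.py path is an assumption; names, sizes, and flags are taken verbatim from the trace):

  RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
  # Create the TCP transport with the options the harness uses.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  # 64 MB malloc bdev with 512-byte blocks, to be exported as nsid 1.
  $RPC bdev_malloc_create 64 512 --name Malloc0
  # Subsystem allowing any host (-a), serial SPDK00000000000001,
  # capped at 2 namespaces (-m 2) so the AER test can hot-add a second.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420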
00:22:32.693 [2024-07-12 19:14:34.511529] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.693 [2024-07-12 19:14:34.511535] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.693 [2024-07-12 19:14:34.511540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.693 [2024-07-12 19:14:34.511600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.693 [2024-07-12 19:14:34.511641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.693 [2024-07-12 19:14:34.511745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.693 [2024-07-12 19:14:34.511746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:32.693 [2024-07-12 19:14:35.211214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:32.693 Malloc0 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.693 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:32.952 [2024-07-12 19:14:35.262822] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:32.952 [ 00:22:32.952 { 00:22:32.952 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:32.952 "subtype": "Discovery", 00:22:32.952 "listen_addresses": [], 00:22:32.952 "allow_any_host": true, 00:22:32.952 "hosts": [] 00:22:32.952 }, 00:22:32.952 { 00:22:32.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.952 "subtype": "NVMe", 00:22:32.952 "listen_addresses": [ 00:22:32.952 { 00:22:32.952 "trtype": "TCP", 00:22:32.952 "adrfam": "IPv4", 00:22:32.952 "traddr": "10.0.0.2", 00:22:32.952 "trsvcid": "4420" 00:22:32.952 } 00:22:32.952 ], 00:22:32.952 "allow_any_host": true, 00:22:32.952 "hosts": [], 00:22:32.952 "serial_number": "SPDK00000000000001", 00:22:32.952 "model_number": "SPDK bdev Controller", 00:22:32.952 "max_namespaces": 2, 00:22:32.952 "min_cntlid": 1, 00:22:32.952 "max_cntlid": 65519, 00:22:32.952 "namespaces": [ 00:22:32.952 { 00:22:32.952 "nsid": 1, 00:22:32.952 "bdev_name": "Malloc0", 00:22:32.952 "name": "Malloc0", 00:22:32.952 "nguid": "2796221B6597405AB393AD912321ED7A", 00:22:32.952 "uuid": "2796221b-6597-405a-b393-ad912321ed7a" 00:22:32.952 } 00:22:32.952 ] 00:22:32.952 } 00:22:32.952 ] 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=378184 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:32.952 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.952 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:33.212 Malloc1 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:33.212 Asynchronous Event Request test 00:22:33.212 Attaching to 10.0.0.2 00:22:33.212 Attached to 10.0.0.2 00:22:33.212 Registering asynchronous event callbacks... 00:22:33.212 Starting namespace attribute notice tests for all controllers... 00:22:33.212 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:33.212 aer_cb - Changed Namespace 00:22:33.212 Cleaning up... 00:22:33.212 [ 00:22:33.212 { 00:22:33.212 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:33.212 "subtype": "Discovery", 00:22:33.212 "listen_addresses": [], 00:22:33.212 "allow_any_host": true, 00:22:33.212 "hosts": [] 00:22:33.212 }, 00:22:33.212 { 00:22:33.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.212 "subtype": "NVMe", 00:22:33.212 "listen_addresses": [ 00:22:33.212 { 00:22:33.212 "trtype": "TCP", 00:22:33.212 "adrfam": "IPv4", 00:22:33.212 "traddr": "10.0.0.2", 00:22:33.212 "trsvcid": "4420" 00:22:33.212 } 00:22:33.212 ], 00:22:33.212 "allow_any_host": true, 00:22:33.212 "hosts": [], 00:22:33.212 "serial_number": "SPDK00000000000001", 00:22:33.212 "model_number": "SPDK bdev Controller", 00:22:33.212 "max_namespaces": 2, 00:22:33.212 "min_cntlid": 1, 00:22:33.212 "max_cntlid": 65519, 00:22:33.212 "namespaces": [ 00:22:33.212 { 00:22:33.212 "nsid": 1, 00:22:33.212 "bdev_name": "Malloc0", 00:22:33.212 "name": "Malloc0", 00:22:33.212 "nguid": "2796221B6597405AB393AD912321ED7A", 00:22:33.212 "uuid": "2796221b-6597-405a-b393-ad912321ed7a" 00:22:33.212 }, 00:22:33.212 { 00:22:33.212 "nsid": 2, 00:22:33.212 "bdev_name": "Malloc1", 00:22:33.212 "name": "Malloc1", 00:22:33.212 "nguid": "771E0DEF74F64444B56166A1279C0FE3", 00:22:33.212 "uuid": "771e0def-74f6-4444-b561-66a1279c0fe3" 00:22:33.212 } 00:22:33.212 ] 00:22:33.212 } 00:22:33.212 ] 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 378184 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:33.212 rmmod nvme_tcp 00:22:33.212 rmmod nvme_fabrics 00:22:33.212 rmmod nvme_keyring 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 377932 ']' 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 377932 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 377932 ']' 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 377932 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 377932 00:22:33.212 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:33.213 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:33.213 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 377932' 00:22:33.213 killing process with pid 377932 00:22:33.213 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 377932 00:22:33.213 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 377932 00:22:33.472 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:33.472 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:33.472 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:33.472 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:33.472 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:33.472 19:14:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.472 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
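The aer run above hinges on one event: test/nvme/aer/aer connects to cnode1 over TCP, arms an Asynchronous Event Request, and signals readiness through /tmp/aer_touch_file, which the script polls with waitforfile; hot-adding Malloc1 as nsid 2 then makes the target post a Namespace Attribute Changed notice (log page 4, aen_event_type 0x02), visible above as "aer_cb - Changed Namespace" before cleanup. The trigger reduces to two RPCs, shown here as a sketch with the arguments from the trace (rpc.py path assumed):

  # Second malloc bdev: 64 MB with 4096-byte blocks ...
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  # ... hot-added as namespace 2; the connected host receives the AEN.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The bdev_malloc_delete and nvmf_delete_subsystem calls around this point, and the _remove_spdk_ns teardown that follows, put the node back the way nvmftestinit found it.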
00:22:33.472 19:14:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.007 19:14:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:36.007 00:22:36.007 real 0m9.554s 00:22:36.007 user 0m7.408s 00:22:36.007 sys 0m4.696s 00:22:36.007 19:14:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:36.007 19:14:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:36.007 ************************************ 00:22:36.007 END TEST nvmf_aer 00:22:36.007 ************************************ 00:22:36.007 19:14:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:36.007 19:14:38 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:36.007 19:14:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:36.007 19:14:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:36.007 19:14:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:36.007 ************************************ 00:22:36.007 START TEST nvmf_async_init 00:22:36.007 ************************************ 00:22:36.007 19:14:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:36.007 * Looking for test storage... 00:22:36.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:36.007 19:14:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.007 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:36.007 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.007 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.007 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.007 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.007 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.007 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.007 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.007 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.007 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=dc6717a9064840eb81c8eb2602ab85d4 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:36.008 19:14:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:41.280 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:41.280 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:41.280 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:41.281 Found net devices under 0000:86:00.0: cvl_0_0 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:41.281 Found net devices under 0000:86:00.1: cvl_0_1 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:41.281 
19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:41.281 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:41.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:22:41.538 00:22:41.538 --- 10.0.0.2 ping statistics --- 00:22:41.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.538 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:41.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:22:41.538 00:22:41.538 --- 10.0.0.1 ping statistics --- 00:22:41.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.538 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.538 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=381721 00:22:41.539 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 381721 00:22:41.539 19:14:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:22:41.539 19:14:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 381721 ']' 00:22:41.539 19:14:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.539 19:14:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.539 19:14:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.539 19:14:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.539 19:14:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.539 [2024-07-12 19:14:43.981609] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:22:41.539 [2024-07-12 19:14:43.981652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.539 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.539 [2024-07-12 19:14:44.049677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.796 [2024-07-12 19:14:44.122037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.796 [2024-07-12 19:14:44.122073] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.796 [2024-07-12 19:14:44.122079] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.796 [2024-07-12 19:14:44.122085] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.796 [2024-07-12 19:14:44.122090] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
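For anyone replaying this outside the harness: the nvmf_tcp_init sequence traced above reduces to the commands below. This is a minimal sketch assuming the two E810 ports already enumerate as cvl_0_0 and cvl_0_1 (names vary per machine); every command is taken from the trace itself.

    # Split the two ports: cvl_0_0 becomes the target side inside a private
    # namespace (10.0.0.2), cvl_0_1 stays in the root namespace as the
    # initiator (10.0.0.1), so traffic crosses real wire instead of loopback.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP (port 4420) through the host firewall, then probe both
    # directions before starting the target inside the namespace.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1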
00:22:41.796 [2024-07-12 19:14:44.122125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.366 [2024-07-12 19:14:44.829325] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.366 null0 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dc6717a9064840eb81c8eb2602ab85d4 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.366 [2024-07-12 19:14:44.873543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.366 19:14:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.625 nvme0n1 00:22:42.625 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.625 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:42.625 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.625 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.625 [ 00:22:42.625 { 00:22:42.625 "name": "nvme0n1", 00:22:42.625 "aliases": [ 00:22:42.625 "dc6717a9-0648-40eb-81c8-eb2602ab85d4" 00:22:42.625 ], 00:22:42.625 "product_name": "NVMe disk", 00:22:42.625 "block_size": 512, 00:22:42.625 "num_blocks": 2097152, 00:22:42.625 "uuid": "dc6717a9-0648-40eb-81c8-eb2602ab85d4", 00:22:42.625 "assigned_rate_limits": { 00:22:42.625 "rw_ios_per_sec": 0, 00:22:42.625 "rw_mbytes_per_sec": 0, 00:22:42.625 "r_mbytes_per_sec": 0, 00:22:42.625 "w_mbytes_per_sec": 0 00:22:42.625 }, 00:22:42.625 "claimed": false, 00:22:42.625 "zoned": false, 00:22:42.625 "supported_io_types": { 00:22:42.625 "read": true, 00:22:42.625 "write": true, 00:22:42.625 "unmap": false, 00:22:42.625 "flush": true, 00:22:42.625 "reset": true, 00:22:42.625 "nvme_admin": true, 00:22:42.625 "nvme_io": true, 00:22:42.625 "nvme_io_md": false, 00:22:42.625 "write_zeroes": true, 00:22:42.625 "zcopy": false, 00:22:42.625 "get_zone_info": false, 00:22:42.625 "zone_management": false, 00:22:42.625 "zone_append": false, 00:22:42.625 "compare": true, 00:22:42.625 "compare_and_write": true, 00:22:42.625 "abort": true, 00:22:42.625 "seek_hole": false, 00:22:42.625 "seek_data": false, 00:22:42.625 "copy": true, 00:22:42.625 "nvme_iov_md": false 00:22:42.625 }, 00:22:42.625 "memory_domains": [ 00:22:42.625 { 00:22:42.625 "dma_device_id": "system", 00:22:42.625 "dma_device_type": 1 00:22:42.625 } 00:22:42.625 ], 00:22:42.625 "driver_specific": { 00:22:42.625 "nvme": [ 00:22:42.625 { 00:22:42.625 "trid": { 00:22:42.625 "trtype": "TCP", 00:22:42.625 "adrfam": "IPv4", 00:22:42.625 "traddr": "10.0.0.2", 00:22:42.625 "trsvcid": "4420", 00:22:42.625 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:42.625 }, 00:22:42.625 "ctrlr_data": { 00:22:42.625 "cntlid": 1, 00:22:42.625 "vendor_id": "0x8086", 00:22:42.626 "model_number": "SPDK bdev Controller", 00:22:42.626 "serial_number": "00000000000000000000", 00:22:42.626 "firmware_revision": "24.09", 00:22:42.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:42.626 "oacs": { 00:22:42.626 "security": 0, 00:22:42.626 "format": 0, 00:22:42.626 "firmware": 0, 00:22:42.626 "ns_manage": 0 00:22:42.626 }, 00:22:42.626 "multi_ctrlr": true, 00:22:42.626 "ana_reporting": false 00:22:42.626 }, 00:22:42.626 "vs": { 00:22:42.626 "nvme_version": "1.3" 00:22:42.626 }, 00:22:42.626 "ns_data": { 00:22:42.626 "id": 1, 00:22:42.626 "can_share": true 00:22:42.626 } 00:22:42.626 } 00:22:42.626 ], 00:22:42.626 "mp_policy": "active_passive" 00:22:42.626 } 00:22:42.626 } 00:22:42.626 ] 00:22:42.626 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.626 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
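Worth noting while reading the three bdev_get_bdevs dumps in this test: "cntlid" is 1 above, the dump after the reset just issued reports 2, and the post-TLS reconnect below reports 3, i.e. each reconnect negotiates a fresh controller ID. A hedged one-liner to watch that by hand (assumes SPDK's scripts/rpc.py and jq are on PATH; the JSON path matches the dump above):

    # Extract the active controller ID from the nvme bdev's driver data.
    scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'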
00:22:42.626 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.626 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.626 [2024-07-12 19:14:45.138081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.626 [2024-07-12 19:14:45.138138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cb250 (9): Bad file descriptor 00:22:42.883 [2024-07-12 19:14:45.270302] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.883 [ 00:22:42.883 { 00:22:42.883 "name": "nvme0n1", 00:22:42.883 "aliases": [ 00:22:42.883 "dc6717a9-0648-40eb-81c8-eb2602ab85d4" 00:22:42.883 ], 00:22:42.883 "product_name": "NVMe disk", 00:22:42.883 "block_size": 512, 00:22:42.883 "num_blocks": 2097152, 00:22:42.883 "uuid": "dc6717a9-0648-40eb-81c8-eb2602ab85d4", 00:22:42.883 "assigned_rate_limits": { 00:22:42.883 "rw_ios_per_sec": 0, 00:22:42.883 "rw_mbytes_per_sec": 0, 00:22:42.883 "r_mbytes_per_sec": 0, 00:22:42.883 "w_mbytes_per_sec": 0 00:22:42.883 }, 00:22:42.883 "claimed": false, 00:22:42.883 "zoned": false, 00:22:42.883 "supported_io_types": { 00:22:42.883 "read": true, 00:22:42.883 "write": true, 00:22:42.883 "unmap": false, 00:22:42.883 "flush": true, 00:22:42.883 "reset": true, 00:22:42.883 "nvme_admin": true, 00:22:42.883 "nvme_io": true, 00:22:42.883 "nvme_io_md": false, 00:22:42.883 "write_zeroes": true, 00:22:42.883 "zcopy": false, 00:22:42.883 "get_zone_info": false, 00:22:42.883 "zone_management": false, 00:22:42.883 "zone_append": false, 00:22:42.883 "compare": true, 00:22:42.883 "compare_and_write": true, 00:22:42.883 "abort": true, 00:22:42.883 "seek_hole": false, 00:22:42.883 "seek_data": false, 00:22:42.883 "copy": true, 00:22:42.883 "nvme_iov_md": false 00:22:42.883 }, 00:22:42.883 "memory_domains": [ 00:22:42.883 { 00:22:42.883 "dma_device_id": "system", 00:22:42.883 "dma_device_type": 1 00:22:42.883 } 00:22:42.883 ], 00:22:42.883 "driver_specific": { 00:22:42.883 "nvme": [ 00:22:42.883 { 00:22:42.883 "trid": { 00:22:42.883 "trtype": "TCP", 00:22:42.883 "adrfam": "IPv4", 00:22:42.883 "traddr": "10.0.0.2", 00:22:42.883 "trsvcid": "4420", 00:22:42.883 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:42.883 }, 00:22:42.883 "ctrlr_data": { 00:22:42.883 "cntlid": 2, 00:22:42.883 "vendor_id": "0x8086", 00:22:42.883 "model_number": "SPDK bdev Controller", 00:22:42.883 "serial_number": "00000000000000000000", 00:22:42.883 "firmware_revision": "24.09", 00:22:42.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:42.883 "oacs": { 00:22:42.883 "security": 0, 00:22:42.883 "format": 0, 00:22:42.883 "firmware": 0, 00:22:42.883 "ns_manage": 0 00:22:42.883 }, 00:22:42.883 "multi_ctrlr": true, 00:22:42.883 "ana_reporting": false 00:22:42.883 }, 00:22:42.883 "vs": { 00:22:42.883 "nvme_version": "1.3" 00:22:42.883 }, 00:22:42.883 "ns_data": { 00:22:42.883 "id": 1, 00:22:42.883 "can_share": true 00:22:42.883 } 00:22:42.883 } 00:22:42.883 ], 00:22:42.883 "mp_policy": "active_passive" 00:22:42.883 } 00:22:42.883 } 
00:22:42.883 ] 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.bDa1E10Od2 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.bDa1E10Od2 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.883 [2024-07-12 19:14:45.330678] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:42.883 [2024-07-12 19:14:45.330784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bDa1E10Od2 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.883 [2024-07-12 19:14:45.338693] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bDa1E10Od2 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.883 [2024-07-12 19:14:45.350741] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:42.883 [2024-07-12 19:14:45.350775] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
00:22:42.883 nvme0n1 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:42.883 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.884 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.884 [ 00:22:42.884 { 00:22:42.884 "name": "nvme0n1", 00:22:42.884 "aliases": [ 00:22:42.884 "dc6717a9-0648-40eb-81c8-eb2602ab85d4" 00:22:42.884 ], 00:22:42.884 "product_name": "NVMe disk", 00:22:42.884 "block_size": 512, 00:22:42.884 "num_blocks": 2097152, 00:22:42.884 "uuid": "dc6717a9-0648-40eb-81c8-eb2602ab85d4", 00:22:42.884 "assigned_rate_limits": { 00:22:42.884 "rw_ios_per_sec": 0, 00:22:42.884 "rw_mbytes_per_sec": 0, 00:22:42.884 "r_mbytes_per_sec": 0, 00:22:42.884 "w_mbytes_per_sec": 0 00:22:42.884 }, 00:22:42.884 "claimed": false, 00:22:42.884 "zoned": false, 00:22:42.884 "supported_io_types": { 00:22:42.884 "read": true, 00:22:42.884 "write": true, 00:22:42.884 "unmap": false, 00:22:42.884 "flush": true, 00:22:42.884 "reset": true, 00:22:42.884 "nvme_admin": true, 00:22:42.884 "nvme_io": true, 00:22:42.884 "nvme_io_md": false, 00:22:42.884 "write_zeroes": true, 00:22:42.884 "zcopy": false, 00:22:42.884 "get_zone_info": false, 00:22:42.884 "zone_management": false, 00:22:42.884 "zone_append": false, 00:22:42.884 "compare": true, 00:22:42.884 "compare_and_write": true, 00:22:42.884 "abort": true, 00:22:42.884 "seek_hole": false, 00:22:42.884 "seek_data": false, 00:22:42.884 "copy": true, 00:22:42.884 "nvme_iov_md": false 00:22:42.884 }, 00:22:42.884 "memory_domains": [ 00:22:42.884 { 00:22:42.884 "dma_device_id": "system", 00:22:42.884 "dma_device_type": 1 00:22:42.884 } 00:22:42.884 ], 00:22:42.884 "driver_specific": { 00:22:42.884 "nvme": [ 00:22:42.884 { 00:22:42.884 "trid": { 00:22:42.884 "trtype": "TCP", 00:22:42.884 "adrfam": "IPv4", 00:22:42.884 "traddr": "10.0.0.2", 00:22:42.884 "trsvcid": "4421", 00:22:42.884 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:42.884 }, 00:22:42.884 "ctrlr_data": { 00:22:42.884 "cntlid": 3, 00:22:42.884 "vendor_id": "0x8086", 00:22:42.884 "model_number": "SPDK bdev Controller", 00:22:42.884 "serial_number": "00000000000000000000", 00:22:42.884 "firmware_revision": "24.09", 00:22:42.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:42.884 "oacs": { 00:22:42.884 "security": 0, 00:22:42.884 "format": 0, 00:22:42.884 "firmware": 0, 00:22:42.884 "ns_manage": 0 00:22:42.884 }, 00:22:42.884 "multi_ctrlr": true, 00:22:42.884 "ana_reporting": false 00:22:42.884 }, 00:22:42.884 "vs": { 00:22:42.884 "nvme_version": "1.3" 00:22:42.884 }, 00:22:42.884 "ns_data": { 00:22:42.884 "id": 1, 00:22:42.884 "can_share": true 00:22:42.884 } 00:22:42.884 } 00:22:42.884 ], 00:22:42.884 "mp_policy": "active_passive" 00:22:42.884 } 00:22:42.884 } 00:22:42.884 ] 00:22:42.884 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.884 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.884 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.884 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.142 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.142 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.bDa1E10Od2 00:22:43.142 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:43.142 19:14:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:43.142 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:43.142 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:43.142 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:43.142 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:43.143 rmmod nvme_tcp 00:22:43.143 rmmod nvme_fabrics 00:22:43.143 rmmod nvme_keyring 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 381721 ']' 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 381721 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 381721 ']' 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 381721 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 381721 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 381721' 00:22:43.143 killing process with pid 381721 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 381721 00:22:43.143 [2024-07-12 19:14:45.561262] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:43.143 [2024-07-12 19:14:45.561286] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:43.143 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 381721 00:22:43.402 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:43.402 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:43.402 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:43.402 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:43.402 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:43.402 19:14:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.402 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.402 19:14:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:22:45.303 19:14:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:45.303 00:22:45.303 real 0m9.709s 00:22:45.303 user 0m3.647s 00:22:45.303 sys 0m4.611s 00:22:45.303 19:14:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:45.303 19:14:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:45.303 ************************************ 00:22:45.303 END TEST nvmf_async_init 00:22:45.303 ************************************ 00:22:45.303 19:14:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:45.303 19:14:47 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:45.303 19:14:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:45.303 19:14:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:45.303 19:14:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:45.618 ************************************ 00:22:45.618 START TEST dma 00:22:45.618 ************************************ 00:22:45.618 19:14:47 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:45.618 * Looking for test storage... 00:22:45.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:45.618 19:14:47 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.618 19:14:47 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.618 19:14:47 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.618 19:14:47 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.618 19:14:47 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.618 19:14:47 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.618 19:14:47 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.618 19:14:47 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:45.618 19:14:47 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:45.618 19:14:47 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:45.618 19:14:47 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:45.618 19:14:47 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:45.618 00:22:45.618 real 0m0.121s 00:22:45.618 user 0m0.059s 00:22:45.618 sys 0m0.071s 00:22:45.618 19:14:47 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:45.618 19:14:47 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:22:45.618 ************************************ 00:22:45.618 END TEST dma 00:22:45.618 ************************************ 00:22:45.619 19:14:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:45.619 19:14:48 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:45.619 19:14:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:45.619 19:14:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:45.619 19:14:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:45.619 ************************************ 00:22:45.619 START TEST nvmf_identify 00:22:45.619 ************************************ 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:45.619 * Looking for test storage... 00:22:45.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.619 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:45.877 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:45.877 19:14:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:45.877 19:14:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:51.152 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:51.152 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:51.152 Found net devices under 0000:86:00.0: cvl_0_0 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
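The device scan that follows (nvmf/common.sh@340-400, identical to the async_init prologue earlier) classifies NICs purely by PCI vendor:device pair; 0x8086:0x159b is the dual-port E810 bound to ice on this node. A rough standalone equivalent over plain sysfs, assuming no SPDK helpers:

    # Walk every PCI function; keep E810s and list their net interfaces,
    # mirroring the "Found ..." lines in the trace.
    for pci in /sys/bus/pci/devices/*; do
        read -r vendor < "$pci/vendor"
        read -r device < "$pci/device"
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done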
00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:51.152 Found net devices under 0000:86:00.1: cvl_0_1 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:51.152 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:51.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:51.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms
00:22:51.410 
00:22:51.410 --- 10.0.0.2 ping statistics ---
00:22:51.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:51.410 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:51.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:51.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms
00:22:51.410 
00:22:51.410 --- 10.0.0.1 ping statistics ---
00:22:51.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:51.410 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=385522
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 385522
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 385522 ']'
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:51.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:51.410 19:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:51.410 [2024-07-12 19:14:53.877054] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
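Editor's note: by this point the harness has flushed both E810 ports, moved one into a private network namespace, addressed the pair as 10.0.0.1/10.0.0.2, opened TCP port 4420, verified reachability in both directions, and launched nvmf_tgt inside the target namespace. Condensed into the underlying ip/iptables calls (interface names, namespace name, and addresses taken from the log; a sketch of the topology, not the harness code):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # target lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Splitting target and initiator across namespaces forces traffic between the two physical ports onto a real TCP path instead of being short-circuited through loopback by the kernel.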
00:22:51.410 [2024-07-12 19:14:53.877094] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:51.410 EAL: No free 2048 kB hugepages reported on node 1
00:22:51.410 [2024-07-12 19:14:53.945468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:51.668 [2024-07-12 19:14:54.020555] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:51.668 [2024-07-12 19:14:54.020597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:51.668 [2024-07-12 19:14:54.020604] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:51.668 [2024-07-12 19:14:54.020612] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:51.668 [2024-07-12 19:14:54.020617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:51.668 [2024-07-12 19:14:54.020694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:22:51.668 [2024-07-12 19:14:54.020735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:22:51.668 [2024-07-12 19:14:54.020827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:22:51.668 [2024-07-12 19:14:54.020828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:22:52.234 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:52.234 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0
00:22:52.234 19:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:52.234 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:52.234 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:52.234 [2024-07-12 19:14:54.680954] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:52.235 Malloc0
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:52.235 [2024-07-12 19:14:54.764706] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:52.235 [
00:22:52.235 {
00:22:52.235 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:52.235 "subtype": "Discovery",
00:22:52.235 "listen_addresses": [
00:22:52.235 {
00:22:52.235 "trtype": "TCP",
00:22:52.235 "adrfam": "IPv4",
00:22:52.235 "traddr": "10.0.0.2",
00:22:52.235 "trsvcid": "4420"
00:22:52.235 }
00:22:52.235 ],
00:22:52.235 "allow_any_host": true,
00:22:52.235 "hosts": []
00:22:52.235 },
00:22:52.235 {
00:22:52.235 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:52.235 "subtype": "NVMe",
00:22:52.235 "listen_addresses": [
00:22:52.235 {
00:22:52.235 "trtype": "TCP",
00:22:52.235 "adrfam": "IPv4",
00:22:52.235 "traddr": "10.0.0.2",
00:22:52.235 "trsvcid": "4420"
00:22:52.235 }
00:22:52.235 ],
00:22:52.235 "allow_any_host": true,
00:22:52.235 "hosts": [],
00:22:52.235 "serial_number": "SPDK00000000000001",
00:22:52.235 "model_number": "SPDK bdev Controller",
00:22:52.235 "max_namespaces": 32,
00:22:52.235 "min_cntlid": 1,
00:22:52.235 "max_cntlid": 65519,
00:22:52.235 "namespaces": [
00:22:52.235 {
00:22:52.235 "nsid": 1,
00:22:52.235 "bdev_name": "Malloc0",
00:22:52.235 "name": "Malloc0",
00:22:52.235 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:22:52.235 "eui64": "ABCDEF0123456789",
00:22:52.235 "uuid": "6f09fdaf-a254-4ef3-ac2c-c27322f47022"
00:22:52.235 }
00:22:52.235 ]
00:22:52.235 }
00:22:52.235 ]
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:52.235 19:14:54 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:22:52.498 [2024-07-12 19:14:54.816059] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
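Editor's note: the rpc_cmd provisioning sequence above can be replayed against a running nvmf_tgt with SPDK's stock scripts/rpc.py. Method names and arguments are verbatim from the log; the rpc.py path and the comments are assumptions of this sketch:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed checkout path
    $rpc nvmf_create_transport -t tcp -o -u 8192        # transport flags exactly as the harness passed them
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_get_subsystems                            # emits the JSON dumped above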
00:22:52.498 [2024-07-12 19:14:54.816092] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385569 ] 00:22:52.498 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.498 [2024-07-12 19:14:54.845772] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:52.498 [2024-07-12 19:14:54.845818] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:52.498 [2024-07-12 19:14:54.845822] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:52.498 [2024-07-12 19:14:54.845835] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:52.498 [2024-07-12 19:14:54.845841] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:52.498 [2024-07-12 19:14:54.846096] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:52.498 [2024-07-12 19:14:54.846123] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e07ec0 0 00:22:52.498 [2024-07-12 19:14:54.860235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:52.498 [2024-07-12 19:14:54.860247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:52.498 [2024-07-12 19:14:54.860251] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:52.498 [2024-07-12 19:14:54.860255] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:52.498 [2024-07-12 19:14:54.860290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.860296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.860299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e07ec0) 00:22:52.498 [2024-07-12 19:14:54.860313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:52.498 [2024-07-12 19:14:54.860328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ae40, cid 0, qid 0 00:22:52.498 [2024-07-12 19:14:54.868236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.498 [2024-07-12 19:14:54.868245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.498 [2024-07-12 19:14:54.868248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.868252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8ae40) on tqpair=0x1e07ec0 00:22:52.498 [2024-07-12 19:14:54.868262] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:52.498 [2024-07-12 19:14:54.868268] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:52.498 [2024-07-12 19:14:54.868273] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:52.498 [2024-07-12 19:14:54.868287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.868293] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.868297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e07ec0) 00:22:52.498 [2024-07-12 19:14:54.868305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.498 [2024-07-12 19:14:54.868317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ae40, cid 0, qid 0 00:22:52.498 [2024-07-12 19:14:54.868459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.498 [2024-07-12 19:14:54.868466] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.498 [2024-07-12 19:14:54.868469] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.868472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8ae40) on tqpair=0x1e07ec0 00:22:52.498 [2024-07-12 19:14:54.868477] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:52.498 [2024-07-12 19:14:54.868484] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:52.498 [2024-07-12 19:14:54.868490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.868493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.868497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e07ec0) 00:22:52.498 [2024-07-12 19:14:54.868503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.498 [2024-07-12 19:14:54.868513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ae40, cid 0, qid 0 00:22:52.498 [2024-07-12 19:14:54.868579] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.498 [2024-07-12 19:14:54.868585] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.498 [2024-07-12 19:14:54.868588] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.868591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8ae40) on tqpair=0x1e07ec0 00:22:52.498 [2024-07-12 19:14:54.868596] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:52.498 [2024-07-12 19:14:54.868603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:52.498 [2024-07-12 19:14:54.868609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.868613] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.868616] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e07ec0) 00:22:52.498 [2024-07-12 19:14:54.868622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.498 [2024-07-12 19:14:54.868630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ae40, cid 0, qid 0 00:22:52.498 [2024-07-12 19:14:54.868688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.498 
[2024-07-12 19:14:54.868695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.498 [2024-07-12 19:14:54.868698] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.868701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8ae40) on tqpair=0x1e07ec0 00:22:52.498 [2024-07-12 19:14:54.868706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:52.498 [2024-07-12 19:14:54.868714] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.868717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.868720] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e07ec0) 00:22:52.498 [2024-07-12 19:14:54.868726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.498 [2024-07-12 19:14:54.868737] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ae40, cid 0, qid 0 00:22:52.498 [2024-07-12 19:14:54.868798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.498 [2024-07-12 19:14:54.868804] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.498 [2024-07-12 19:14:54.868808] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.868812] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8ae40) on tqpair=0x1e07ec0 00:22:52.498 [2024-07-12 19:14:54.868816] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:52.498 [2024-07-12 19:14:54.868820] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:52.498 [2024-07-12 19:14:54.868826] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:52.498 [2024-07-12 19:14:54.868931] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:52.498 [2024-07-12 19:14:54.868936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:52.498 [2024-07-12 19:14:54.868945] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.868948] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.868951] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e07ec0) 00:22:52.498 [2024-07-12 19:14:54.868957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.498 [2024-07-12 19:14:54.868966] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ae40, cid 0, qid 0 00:22:52.498 [2024-07-12 19:14:54.869028] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.498 [2024-07-12 19:14:54.869034] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.498 [2024-07-12 19:14:54.869037] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.869041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8ae40) on tqpair=0x1e07ec0 00:22:52.498 [2024-07-12 19:14:54.869045] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:52.498 [2024-07-12 19:14:54.869052] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.869056] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.869059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e07ec0) 00:22:52.498 [2024-07-12 19:14:54.869065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.498 [2024-07-12 19:14:54.869074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ae40, cid 0, qid 0 00:22:52.498 [2024-07-12 19:14:54.869141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.498 [2024-07-12 19:14:54.869147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.498 [2024-07-12 19:14:54.869150] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.498 [2024-07-12 19:14:54.869154] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8ae40) on tqpair=0x1e07ec0 00:22:52.498 [2024-07-12 19:14:54.869158] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:52.498 [2024-07-12 19:14:54.869162] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:52.498 [2024-07-12 19:14:54.869168] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:52.498 [2024-07-12 19:14:54.869178] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:52.499 [2024-07-12 19:14:54.869186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e07ec0) 00:22:52.499 [2024-07-12 19:14:54.869196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.499 [2024-07-12 19:14:54.869205] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ae40, cid 0, qid 0 00:22:52.499 [2024-07-12 19:14:54.869300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.499 [2024-07-12 19:14:54.869307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.499 [2024-07-12 19:14:54.869310] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869314] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e07ec0): datao=0, datal=4096, cccid=0 00:22:52.499 [2024-07-12 19:14:54.869318] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8ae40) on tqpair(0x1e07ec0): expected_datao=0, payload_size=4096 00:22:52.499 [2024-07-12 19:14:54.869322] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869340] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869345] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.499 [2024-07-12 19:14:54.869386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.499 [2024-07-12 19:14:54.869389] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869393] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8ae40) on tqpair=0x1e07ec0 00:22:52.499 [2024-07-12 19:14:54.869400] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:52.499 [2024-07-12 19:14:54.869407] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:52.499 [2024-07-12 19:14:54.869411] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:52.499 [2024-07-12 19:14:54.869415] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:52.499 [2024-07-12 19:14:54.869419] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:52.499 [2024-07-12 19:14:54.869423] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:52.499 [2024-07-12 19:14:54.869430] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:52.499 [2024-07-12 19:14:54.869436] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869443] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e07ec0) 00:22:52.499 [2024-07-12 19:14:54.869450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.499 [2024-07-12 19:14:54.869459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ae40, cid 0, qid 0 00:22:52.499 [2024-07-12 19:14:54.869539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.499 [2024-07-12 19:14:54.869545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.499 [2024-07-12 19:14:54.869548] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8ae40) on tqpair=0x1e07ec0 00:22:52.499 [2024-07-12 19:14:54.869560] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869567] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e07ec0) 00:22:52.499 [2024-07-12 19:14:54.869572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.499 [2024-07-12 19:14:54.869577] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e07ec0) 00:22:52.499 [2024-07-12 19:14:54.869589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.499 [2024-07-12 19:14:54.869594] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e07ec0) 00:22:52.499 [2024-07-12 19:14:54.869605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.499 [2024-07-12 19:14:54.869610] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869613] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869616] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.499 [2024-07-12 19:14:54.869621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.499 [2024-07-12 19:14:54.869625] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:52.499 [2024-07-12 19:14:54.869636] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:52.499 [2024-07-12 19:14:54.869641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869645] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e07ec0) 00:22:52.499 [2024-07-12 19:14:54.869651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.499 [2024-07-12 19:14:54.869661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ae40, cid 0, qid 0 00:22:52.499 [2024-07-12 19:14:54.869666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8afc0, cid 1, qid 0 00:22:52.499 [2024-07-12 19:14:54.869670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b140, cid 2, qid 0 00:22:52.499 [2024-07-12 19:14:54.869674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.499 [2024-07-12 19:14:54.869678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b440, cid 4, qid 0 00:22:52.499 [2024-07-12 19:14:54.869787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.499 [2024-07-12 19:14:54.869793] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.499 [2024-07-12 19:14:54.869797] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869800] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b440) on tqpair=0x1e07ec0 00:22:52.499 [2024-07-12 19:14:54.869804] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:52.499 [2024-07-12 19:14:54.869809] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:52.499 [2024-07-12 19:14:54.869820] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e07ec0) 00:22:52.499 [2024-07-12 19:14:54.869829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.499 [2024-07-12 19:14:54.869839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b440, cid 4, qid 0 00:22:52.499 [2024-07-12 19:14:54.869911] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.499 [2024-07-12 19:14:54.869917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.499 [2024-07-12 19:14:54.869920] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869924] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e07ec0): datao=0, datal=4096, cccid=4 00:22:52.499 [2024-07-12 19:14:54.869928] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8b440) on tqpair(0x1e07ec0): expected_datao=0, payload_size=4096 00:22:52.499 [2024-07-12 19:14:54.869932] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869944] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.869948] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.910340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.499 [2024-07-12 19:14:54.910352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.499 [2024-07-12 19:14:54.910355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.910359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b440) on tqpair=0x1e07ec0 00:22:52.499 [2024-07-12 19:14:54.910372] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:52.499 [2024-07-12 19:14:54.910396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.910400] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e07ec0) 00:22:52.499 [2024-07-12 19:14:54.910407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.499 [2024-07-12 19:14:54.910413] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.910416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.910419] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e07ec0) 00:22:52.499 [2024-07-12 19:14:54.910425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.499 [2024-07-12 19:14:54.910439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1e8b440, cid 4, qid 0 00:22:52.499 [2024-07-12 19:14:54.910444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b5c0, cid 5, qid 0 00:22:52.499 [2024-07-12 19:14:54.910558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.499 [2024-07-12 19:14:54.910564] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.499 [2024-07-12 19:14:54.910567] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.910570] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e07ec0): datao=0, datal=1024, cccid=4 00:22:52.499 [2024-07-12 19:14:54.910574] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8b440) on tqpair(0x1e07ec0): expected_datao=0, payload_size=1024 00:22:52.499 [2024-07-12 19:14:54.910578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.910584] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.910587] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.910592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.499 [2024-07-12 19:14:54.910597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.499 [2024-07-12 19:14:54.910602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.910606] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b5c0) on tqpair=0x1e07ec0 00:22:52.499 [2024-07-12 19:14:54.954235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.499 [2024-07-12 19:14:54.954245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.499 [2024-07-12 19:14:54.954248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.499 [2024-07-12 19:14:54.954252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b440) on tqpair=0x1e07ec0 00:22:52.499 [2024-07-12 19:14:54.954268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.500 [2024-07-12 19:14:54.954272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e07ec0) 00:22:52.500 [2024-07-12 19:14:54.954278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.500 [2024-07-12 19:14:54.954294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b440, cid 4, qid 0 00:22:52.500 [2024-07-12 19:14:54.954430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.500 [2024-07-12 19:14:54.954436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.500 [2024-07-12 19:14:54.954439] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.500 [2024-07-12 19:14:54.954442] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e07ec0): datao=0, datal=3072, cccid=4 00:22:52.500 [2024-07-12 19:14:54.954447] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8b440) on tqpair(0x1e07ec0): expected_datao=0, payload_size=3072 00:22:52.500 [2024-07-12 19:14:54.954451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.500 [2024-07-12 19:14:54.954467] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.500 [2024-07-12 19:14:54.954471] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.500 [2024-07-12 19:14:54.954513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.500 [2024-07-12 19:14:54.954519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.500 [2024-07-12 19:14:54.954522] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.500 [2024-07-12 19:14:54.954525] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b440) on tqpair=0x1e07ec0 00:22:52.500 [2024-07-12 19:14:54.954532] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.500 [2024-07-12 19:14:54.954536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e07ec0) 00:22:52.500 [2024-07-12 19:14:54.954541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.500 [2024-07-12 19:14:54.954553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b440, cid 4, qid 0 00:22:52.500 [2024-07-12 19:14:54.954625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.500 [2024-07-12 19:14:54.954631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.500 [2024-07-12 19:14:54.954634] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.500 [2024-07-12 19:14:54.954637] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e07ec0): datao=0, datal=8, cccid=4 00:22:52.500 [2024-07-12 19:14:54.954641] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8b440) on tqpair(0x1e07ec0): expected_datao=0, payload_size=8 00:22:52.500 [2024-07-12 19:14:54.954644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.500 [2024-07-12 19:14:54.954649] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.500 [2024-07-12 19:14:54.954653] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.500 [2024-07-12 19:14:54.995292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.500 [2024-07-12 19:14:54.995302] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.500 [2024-07-12 19:14:54.995305] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.500 [2024-07-12 19:14:54.995311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b440) on tqpair=0x1e07ec0 00:22:52.500 ===================================================== 00:22:52.500 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:52.500 ===================================================== 00:22:52.500 Controller Capabilities/Features 00:22:52.500 ================================ 00:22:52.500 Vendor ID: 0000 00:22:52.500 Subsystem Vendor ID: 0000 00:22:52.500 Serial Number: .................... 00:22:52.500 Model Number: ........................................ 
00:22:52.500 Firmware Version: 24.09
00:22:52.500 Recommended Arb Burst: 0
00:22:52.500 IEEE OUI Identifier: 00 00 00
00:22:52.500 Multi-path I/O
00:22:52.500 May have multiple subsystem ports: No
00:22:52.500 May have multiple controllers: No
00:22:52.500 Associated with SR-IOV VF: No
00:22:52.500 Max Data Transfer Size: 131072
00:22:52.500 Max Number of Namespaces: 0
00:22:52.500 Max Number of I/O Queues: 1024
00:22:52.500 NVMe Specification Version (VS): 1.3
00:22:52.500 NVMe Specification Version (Identify): 1.3
00:22:52.500 Maximum Queue Entries: 128
00:22:52.500 Contiguous Queues Required: Yes
00:22:52.500 Arbitration Mechanisms Supported
00:22:52.500 Weighted Round Robin: Not Supported
00:22:52.500 Vendor Specific: Not Supported
00:22:52.500 Reset Timeout: 15000 ms
00:22:52.500 Doorbell Stride: 4 bytes
00:22:52.500 NVM Subsystem Reset: Not Supported
00:22:52.500 Command Sets Supported
00:22:52.500 NVM Command Set: Supported
00:22:52.500 Boot Partition: Not Supported
00:22:52.500 Memory Page Size Minimum: 4096 bytes
00:22:52.500 Memory Page Size Maximum: 4096 bytes
00:22:52.500 Persistent Memory Region: Not Supported
00:22:52.500 Optional Asynchronous Events Supported
00:22:52.500 Namespace Attribute Notices: Not Supported
00:22:52.500 Firmware Activation Notices: Not Supported
00:22:52.500 ANA Change Notices: Not Supported
00:22:52.500 PLE Aggregate Log Change Notices: Not Supported
00:22:52.500 LBA Status Info Alert Notices: Not Supported
00:22:52.500 EGE Aggregate Log Change Notices: Not Supported
00:22:52.500 Normal NVM Subsystem Shutdown event: Not Supported
00:22:52.500 Zone Descriptor Change Notices: Not Supported
00:22:52.500 Discovery Log Change Notices: Supported
00:22:52.500 Controller Attributes
00:22:52.500 128-bit Host Identifier: Not Supported
00:22:52.500 Non-Operational Permissive Mode: Not Supported
00:22:52.500 NVM Sets: Not Supported
00:22:52.500 Read Recovery Levels: Not Supported
00:22:52.500 Endurance Groups: Not Supported
00:22:52.500 Predictable Latency Mode: Not Supported
00:22:52.500 Traffic Based Keep ALive: Not Supported
00:22:52.500 Namespace Granularity: Not Supported
00:22:52.500 SQ Associations: Not Supported
00:22:52.500 UUID List: Not Supported
00:22:52.500 Multi-Domain Subsystem: Not Supported
00:22:52.500 Fixed Capacity Management: Not Supported
00:22:52.500 Variable Capacity Management: Not Supported
00:22:52.500 Delete Endurance Group: Not Supported
00:22:52.500 Delete NVM Set: Not Supported
00:22:52.500 Extended LBA Formats Supported: Not Supported
00:22:52.500 Flexible Data Placement Supported: Not Supported
00:22:52.500 
00:22:52.500 Controller Memory Buffer Support
00:22:52.500 ================================
00:22:52.500 Supported: No
00:22:52.500 
00:22:52.500 Persistent Memory Region Support
00:22:52.500 ================================
00:22:52.500 Supported: No
00:22:52.500 
00:22:52.500 Admin Command Set Attributes
00:22:52.500 ============================
00:22:52.500 Security Send/Receive: Not Supported
00:22:52.500 Format NVM: Not Supported
00:22:52.500 Firmware Activate/Download: Not Supported
00:22:52.500 Namespace Management: Not Supported
00:22:52.500 Device Self-Test: Not Supported
00:22:52.500 Directives: Not Supported
00:22:52.500 NVMe-MI: Not Supported
00:22:52.500 Virtualization Management: Not Supported
00:22:52.500 Doorbell Buffer Config: Not Supported
00:22:52.500 Get LBA Status Capability: Not Supported
00:22:52.500 Command & Feature Lockdown Capability: Not Supported
00:22:52.500 Abort Command Limit: 1
00:22:52.500 Async Event Request Limit: 4
00:22:52.500 Number of Firmware Slots: N/A
00:22:52.500 Firmware Slot 1 Read-Only: N/A
00:22:52.500 Firmware Activation Without Reset: N/A
00:22:52.500 Multiple Update Detection Support: N/A
00:22:52.500 Firmware Update Granularity: No Information Provided
00:22:52.500 Per-Namespace SMART Log: No
00:22:52.500 Asymmetric Namespace Access Log Page: Not Supported
00:22:52.500 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:52.500 Command Effects Log Page: Not Supported
00:22:52.500 Get Log Page Extended Data: Supported
00:22:52.500 Telemetry Log Pages: Not Supported
00:22:52.500 Persistent Event Log Pages: Not Supported
00:22:52.500 Supported Log Pages Log Page: May Support
00:22:52.500 Commands Supported & Effects Log Page: Not Supported
00:22:52.500 Feature Identifiers & Effects Log Page:May Support
00:22:52.500 NVMe-MI Commands & Effects Log Page: May Support
00:22:52.500 Data Area 4 for Telemetry Log: Not Supported
00:22:52.500 Error Log Page Entries Supported: 128
00:22:52.500 Keep Alive: Not Supported
00:22:52.500 
00:22:52.500 NVM Command Set Attributes
00:22:52.500 ==========================
00:22:52.500 Submission Queue Entry Size
00:22:52.500 Max: 1
00:22:52.500 Min: 1
00:22:52.500 Completion Queue Entry Size
00:22:52.500 Max: 1
00:22:52.500 Min: 1
00:22:52.500 Number of Namespaces: 0
00:22:52.500 Compare Command: Not Supported
00:22:52.500 Write Uncorrectable Command: Not Supported
00:22:52.500 Dataset Management Command: Not Supported
00:22:52.500 Write Zeroes Command: Not Supported
00:22:52.500 Set Features Save Field: Not Supported
00:22:52.500 Reservations: Not Supported
00:22:52.500 Timestamp: Not Supported
00:22:52.500 Copy: Not Supported
00:22:52.500 Volatile Write Cache: Not Present
00:22:52.500 Atomic Write Unit (Normal): 1
00:22:52.500 Atomic Write Unit (PFail): 1
00:22:52.500 Atomic Compare & Write Unit: 1
00:22:52.500 Fused Compare & Write: Supported
00:22:52.500 Scatter-Gather List
00:22:52.500 SGL Command Set: Supported
00:22:52.500 SGL Keyed: Supported
00:22:52.500 SGL Bit Bucket Descriptor: Not Supported
00:22:52.500 SGL Metadata Pointer: Not Supported
00:22:52.500 Oversized SGL: Not Supported
00:22:52.500 SGL Metadata Address: Not Supported
00:22:52.500 SGL Offset: Supported
00:22:52.500 Transport SGL Data Block: Not Supported
00:22:52.500 Replay Protected Memory Block: Not Supported
00:22:52.500 
00:22:52.500 Firmware Slot Information
00:22:52.500 =========================
00:22:52.500 Active slot: 0
00:22:52.500 
00:22:52.500 
00:22:52.500 Error Log
00:22:52.500 =========
00:22:52.500 
00:22:52.500 Active Namespaces
00:22:52.500 =================
00:22:52.500 Discovery Log Page
00:22:52.501 ==================
00:22:52.501 Generation Counter: 2
00:22:52.501 Number of Records: 2
00:22:52.501 Record Format: 0
00:22:52.501 
00:22:52.501 Discovery Log Entry 0
00:22:52.501 ----------------------
00:22:52.501 Transport Type: 3 (TCP)
00:22:52.501 Address Family: 1 (IPv4)
00:22:52.501 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:52.501 Entry Flags:
00:22:52.501 Duplicate Returned Information: 1
00:22:52.501 Explicit Persistent Connection Support for Discovery: 1
00:22:52.501 Transport Requirements:
00:22:52.501 Secure Channel: Not Required
00:22:52.501 Port ID: 0 (0x0000)
00:22:52.501 Controller ID: 65535 (0xffff)
00:22:52.501 Admin Max SQ Size: 128
00:22:52.501 Transport Service Identifier: 4420
00:22:52.501 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:52.501 Transport Address: 10.0.0.2
00:22:52.501 Discovery Log Entry 1
00:22:52.501 ----------------------
00:22:52.501 Transport Type: 3 (TCP)
00:22:52.501 Address Family: 1 (IPv4)
00:22:52.501 Subsystem Type: 2 (NVM Subsystem)
00:22:52.501 Entry Flags:
00:22:52.501 Duplicate Returned Information: 0
00:22:52.501 Explicit Persistent Connection Support for Discovery: 0
00:22:52.501 Transport Requirements:
00:22:52.501 Secure Channel: Not Required
00:22:52.501 Port ID: 0 (0x0000)
00:22:52.501 Controller ID: 65535 (0xffff)
00:22:52.501 Admin Max SQ Size: 128
00:22:52.501 Transport Service Identifier: 4420
00:22:52.501 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:52.501 Transport Address: 10.0.0.2
[2024-07-12 19:14:54.995390] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:22:52.501 [2024-07-12 19:14:54.995401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8ae40) on tqpair=0x1e07ec0
00:22:52.501 [2024-07-12 19:14:54.995407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.501 [2024-07-12 19:14:54.995411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8afc0) on tqpair=0x1e07ec0
00:22:52.501 [2024-07-12 19:14:54.995416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.501 [2024-07-12 19:14:54.995420] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b140) on tqpair=0x1e07ec0
00:22:52.501 [2024-07-12 19:14:54.995424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.501 [2024-07-12 19:14:54.995428] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0
00:22:52.501 [2024-07-12 19:14:54.995432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:52.501 [2024-07-12 19:14:54.995441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:52.501 [2024-07-12 19:14:54.995445] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:52.501 [2024-07-12 19:14:54.995448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0)
00:22:52.501 [2024-07-12 19:14:54.995454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:52.501 [2024-07-12 19:14:54.995467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0
00:22:52.501 [2024-07-12 19:14:54.995528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:52.501 [2024-07-12 19:14:54.995534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:52.501 [2024-07-12 19:14:54.995537] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:52.501 [2024-07-12 19:14:54.995540] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0
00:22:52.501 [2024-07-12 19:14:54.995546] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:52.501 [2024-07-12 19:14:54.995550] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:52.501 [2024-07-12 19:14:54.995553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0)
00:22:52.501 [2024-07-12
19:14:54.995559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.501 [2024-07-12 19:14:54.995570] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.501 [2024-07-12 19:14:54.995640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.501 [2024-07-12 19:14:54.995646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.501 [2024-07-12 19:14:54.995649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.995653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.501 [2024-07-12 19:14:54.995657] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:52.501 [2024-07-12 19:14:54.995661] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:52.501 [2024-07-12 19:14:54.995669] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.995673] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.995676] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.501 [2024-07-12 19:14:54.995682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.501 [2024-07-12 19:14:54.995693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.501 [2024-07-12 19:14:54.995754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.501 [2024-07-12 19:14:54.995759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.501 [2024-07-12 19:14:54.995763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.995766] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.501 [2024-07-12 19:14:54.995775] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.995778] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.995781] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.501 [2024-07-12 19:14:54.995787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.501 [2024-07-12 19:14:54.995796] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.501 [2024-07-12 19:14:54.995875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.501 [2024-07-12 19:14:54.995881] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.501 [2024-07-12 19:14:54.995884] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.995887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.501 [2024-07-12 19:14:54.995896] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.995900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.995903] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.501 [2024-07-12 19:14:54.995908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.501 [2024-07-12 19:14:54.995917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.501 [2024-07-12 19:14:54.995986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.501 [2024-07-12 19:14:54.995991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.501 [2024-07-12 19:14:54.995994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.995997] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.501 [2024-07-12 19:14:54.996006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.996010] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.996013] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.501 [2024-07-12 19:14:54.996018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.501 [2024-07-12 19:14:54.996027] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.501 [2024-07-12 19:14:54.996088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.501 [2024-07-12 19:14:54.996093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.501 [2024-07-12 19:14:54.996096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.996099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.501 [2024-07-12 19:14:54.996107] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.996111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.996114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.501 [2024-07-12 19:14:54.996119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.501 [2024-07-12 19:14:54.996130] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.501 [2024-07-12 19:14:54.996192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.501 [2024-07-12 19:14:54.996198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.501 [2024-07-12 19:14:54.996201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.996204] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.501 [2024-07-12 19:14:54.996212] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.996215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.996218] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.501 [2024-07-12 19:14:54.996230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.501 [2024-07-12 19:14:54.996240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.501 [2024-07-12 19:14:54.996308] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.501 [2024-07-12 19:14:54.996314] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.501 [2024-07-12 19:14:54.996317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.996320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.501 [2024-07-12 19:14:54.996328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.996332] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.501 [2024-07-12 19:14:54.996335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.501 [2024-07-12 19:14:54.996340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.501 [2024-07-12 19:14:54.996349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.501 [2024-07-12 19:14:54.996420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.502 [2024-07-12 19:14:54.996425] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.502 [2024-07-12 19:14:54.996428] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.502 [2024-07-12 19:14:54.996440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.502 [2024-07-12 19:14:54.996452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.502 [2024-07-12 19:14:54.996462] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.502 [2024-07-12 19:14:54.996523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.502 [2024-07-12 19:14:54.996528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.502 [2024-07-12 19:14:54.996531] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996535] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.502 [2024-07-12 19:14:54.996542] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.502 [2024-07-12 19:14:54.996554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.502 [2024-07-12 19:14:54.996563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.502 
[2024-07-12 19:14:54.996622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.502 [2024-07-12 19:14:54.996628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.502 [2024-07-12 19:14:54.996631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.502 [2024-07-12 19:14:54.996642] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996646] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.502 [2024-07-12 19:14:54.996654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.502 [2024-07-12 19:14:54.996664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.502 [2024-07-12 19:14:54.996733] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.502 [2024-07-12 19:14:54.996739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.502 [2024-07-12 19:14:54.996742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.502 [2024-07-12 19:14:54.996754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.502 [2024-07-12 19:14:54.996766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.502 [2024-07-12 19:14:54.996776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.502 [2024-07-12 19:14:54.996838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.502 [2024-07-12 19:14:54.996843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.502 [2024-07-12 19:14:54.996846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996849] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.502 [2024-07-12 19:14:54.996858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996862] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996865] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.502 [2024-07-12 19:14:54.996871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.502 [2024-07-12 19:14:54.996880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.502 [2024-07-12 19:14:54.996945] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.502 [2024-07-12 19:14:54.996950] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:52.502 [2024-07-12 19:14:54.996953] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996956] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.502 [2024-07-12 19:14:54.996965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.996972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.502 [2024-07-12 19:14:54.996977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.502 [2024-07-12 19:14:54.996987] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.502 [2024-07-12 19:14:54.997049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.502 [2024-07-12 19:14:54.997059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.502 [2024-07-12 19:14:54.997062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.997065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.502 [2024-07-12 19:14:54.997073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.997076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.997079] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.502 [2024-07-12 19:14:54.997085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.502 [2024-07-12 19:14:54.997094] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.502 [2024-07-12 19:14:54.997156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.502 [2024-07-12 19:14:54.997161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.502 [2024-07-12 19:14:54.997165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.997168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.502 [2024-07-12 19:14:54.997176] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.997180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:54.997183] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.502 [2024-07-12 19:14:54.997189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.502 [2024-07-12 19:14:54.997198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.502 [2024-07-12 19:14:55.001234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.502 [2024-07-12 19:14:55.001242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.502 [2024-07-12 19:14:55.001245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:55.001249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.502 [2024-07-12 19:14:55.001258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:55.001262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:55.001265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e07ec0) 00:22:52.502 [2024-07-12 19:14:55.001271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.502 [2024-07-12 19:14:55.001282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8b2c0, cid 3, qid 0 00:22:52.502 [2024-07-12 19:14:55.001410] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.502 [2024-07-12 19:14:55.001415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.502 [2024-07-12 19:14:55.001419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.502 [2024-07-12 19:14:55.001422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8b2c0) on tqpair=0x1e07ec0 00:22:52.502 [2024-07-12 19:14:55.001428] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:22:52.502 00:22:52.502 19:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:52.502 [2024-07-12 19:14:55.038158] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:22:52.502 [2024-07-12 19:14:55.038199] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385644 ] 00:22:52.502 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.766 [2024-07-12 19:14:55.066472] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:52.766 [2024-07-12 19:14:55.066515] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:52.766 [2024-07-12 19:14:55.066520] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:52.766 [2024-07-12 19:14:55.066533] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:52.766 [2024-07-12 19:14:55.066539] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:52.766 [2024-07-12 19:14:55.066754] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:52.766 [2024-07-12 19:14:55.066778] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2271ec0 0 00:22:52.766 [2024-07-12 19:14:55.081234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:52.766 [2024-07-12 19:14:55.081246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:52.766 [2024-07-12 19:14:55.081249] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:52.766 [2024-07-12 19:14:55.081252] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:52.766 [2024-07-12 19:14:55.081279] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.766 [2024-07-12 19:14:55.081284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.766 [2024-07-12 19:14:55.081288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2271ec0) 00:22:52.766 [2024-07-12 19:14:55.081298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:52.766 [2024-07-12 19:14:55.081313] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4e40, cid 0, qid 0 00:22:52.766 [2024-07-12 19:14:55.089236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.766 [2024-07-12 19:14:55.089246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.766 [2024-07-12 19:14:55.089249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.766 [2024-07-12 19:14:55.089253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4e40) on tqpair=0x2271ec0 00:22:52.766 [2024-07-12 19:14:55.089264] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:52.766 [2024-07-12 19:14:55.089270] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:52.766 [2024-07-12 19:14:55.089275] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:52.766 [2024-07-12 19:14:55.089285] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.766 [2024-07-12 19:14:55.089289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.766 [2024-07-12 19:14:55.089292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2271ec0) 00:22:52.766 [2024-07-12 19:14:55.089299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.766 [2024-07-12 19:14:55.089310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4e40, cid 0, qid 0 00:22:52.766 [2024-07-12 19:14:55.089470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.766 [2024-07-12 19:14:55.089476] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.766 [2024-07-12 19:14:55.089479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.766 [2024-07-12 19:14:55.089483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4e40) on tqpair=0x2271ec0 00:22:52.766 [2024-07-12 19:14:55.089487] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:52.766 [2024-07-12 19:14:55.089495] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:52.766 [2024-07-12 19:14:55.089502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.766 [2024-07-12 19:14:55.089505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.766 [2024-07-12 19:14:55.089508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2271ec0) 00:22:52.766 [2024-07-12 19:14:55.089514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.766 [2024-07-12 19:14:55.089524] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4e40, cid 0, qid 0 00:22:52.766 [2024-07-12 19:14:55.089592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.766 [2024-07-12 19:14:55.089599] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.766 [2024-07-12 19:14:55.089602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.766 [2024-07-12 19:14:55.089605] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4e40) on tqpair=0x2271ec0 00:22:52.766 [2024-07-12 19:14:55.089609] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:52.766 [2024-07-12 19:14:55.089615] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:52.766 [2024-07-12 19:14:55.089621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.089624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.089627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2271ec0) 00:22:52.767 [2024-07-12 19:14:55.089633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.767 [2024-07-12 19:14:55.089642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4e40, cid 0, qid 0 00:22:52.767 [2024-07-12 19:14:55.089702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.767 [2024-07-12 19:14:55.089709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.767 [2024-07-12 19:14:55.089712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.089715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4e40) on tqpair=0x2271ec0 00:22:52.767 [2024-07-12 19:14:55.089719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:52.767 [2024-07-12 19:14:55.089727] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.089731] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.089734] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2271ec0) 00:22:52.767 [2024-07-12 19:14:55.089740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.767 [2024-07-12 19:14:55.089749] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4e40, cid 0, qid 0 00:22:52.767 [2024-07-12 19:14:55.089827] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.767 [2024-07-12 19:14:55.089833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.767 [2024-07-12 19:14:55.089836] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.089839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4e40) on tqpair=0x2271ec0 00:22:52.767 [2024-07-12 19:14:55.089842] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:52.767 [2024-07-12 19:14:55.089847] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:52.767 [2024-07-12 19:14:55.089853] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:52.767 [2024-07-12 19:14:55.089959] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:52.767 [2024-07-12 19:14:55.089964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:52.767 [2024-07-12 19:14:55.089971] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.089974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.089977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2271ec0) 00:22:52.767 [2024-07-12 19:14:55.089983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.767 [2024-07-12 19:14:55.089992] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4e40, cid 0, qid 0 00:22:52.767 [2024-07-12 19:14:55.090051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.767 [2024-07-12 19:14:55.090057] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.767 [2024-07-12 19:14:55.090060] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090063] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4e40) on tqpair=0x2271ec0 00:22:52.767 [2024-07-12 19:14:55.090067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:52.767 [2024-07-12 19:14:55.090075] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2271ec0) 00:22:52.767 [2024-07-12 19:14:55.090087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.767 [2024-07-12 19:14:55.090096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4e40, cid 0, qid 0 00:22:52.767 [2024-07-12 19:14:55.090170] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.767 [2024-07-12 19:14:55.090176] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.767 [2024-07-12 19:14:55.090179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090183] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4e40) on tqpair=0x2271ec0 00:22:52.767 [2024-07-12 19:14:55.090186] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:52.767 [2024-07-12 19:14:55.090191] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:52.767 [2024-07-12 19:14:55.090197] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller (no timeout) 00:22:52.767 [2024-07-12 19:14:55.090204] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:52.767 [2024-07-12 19:14:55.090211] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090214] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2271ec0) 00:22:52.767 [2024-07-12 19:14:55.090220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.767 [2024-07-12 19:14:55.090236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4e40, cid 0, qid 0 00:22:52.767 [2024-07-12 19:14:55.090335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.767 [2024-07-12 19:14:55.090341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.767 [2024-07-12 19:14:55.090344] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090347] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2271ec0): datao=0, datal=4096, cccid=0 00:22:52.767 [2024-07-12 19:14:55.090353] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f4e40) on tqpair(0x2271ec0): expected_datao=0, payload_size=4096 00:22:52.767 [2024-07-12 19:14:55.090357] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090374] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090378] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090412] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.767 [2024-07-12 19:14:55.090418] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.767 [2024-07-12 19:14:55.090421] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4e40) on tqpair=0x2271ec0 00:22:52.767 [2024-07-12 19:14:55.090430] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:52.767 [2024-07-12 19:14:55.090436] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:52.767 [2024-07-12 19:14:55.090440] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:52.767 [2024-07-12 19:14:55.090444] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:52.767 [2024-07-12 19:14:55.090448] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:52.767 [2024-07-12 19:14:55.090452] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:52.767 [2024-07-12 19:14:55.090458] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:52.767 [2024-07-12 19:14:55.090464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.767 
[2024-07-12 19:14:55.090471] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2271ec0) 00:22:52.767 [2024-07-12 19:14:55.090477] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.767 [2024-07-12 19:14:55.090487] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4e40, cid 0, qid 0 00:22:52.767 [2024-07-12 19:14:55.090553] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.767 [2024-07-12 19:14:55.090559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.767 [2024-07-12 19:14:55.090562] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4e40) on tqpair=0x2271ec0 00:22:52.767 [2024-07-12 19:14:55.090570] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090577] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2271ec0) 00:22:52.767 [2024-07-12 19:14:55.090582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.767 [2024-07-12 19:14:55.090587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090590] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090593] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2271ec0) 00:22:52.767 [2024-07-12 19:14:55.090598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.767 [2024-07-12 19:14:55.090603] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090606] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2271ec0) 00:22:52.767 [2024-07-12 19:14:55.090615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.767 [2024-07-12 19:14:55.090621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.767 [2024-07-12 19:14:55.090631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.767 [2024-07-12 19:14:55.090635] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:52.767 [2024-07-12 19:14:55.090645] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:52.767 [2024-07-12 19:14:55.090651] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.767 [2024-07-12 19:14:55.090654] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2271ec0) 00:22:52.768 [2024-07-12 19:14:55.090660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.768 [2024-07-12 19:14:55.090670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4e40, cid 0, qid 0 00:22:52.768 [2024-07-12 19:14:55.090675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4fc0, cid 1, qid 0 00:22:52.768 [2024-07-12 19:14:55.090678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5140, cid 2, qid 0 00:22:52.768 [2024-07-12 19:14:55.090682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.768 [2024-07-12 19:14:55.090686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5440, cid 4, qid 0 00:22:52.768 [2024-07-12 19:14:55.090781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.768 [2024-07-12 19:14:55.090787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.768 [2024-07-12 19:14:55.090790] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.090793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5440) on tqpair=0x2271ec0 00:22:52.768 [2024-07-12 19:14:55.090797] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:52.768 [2024-07-12 19:14:55.090801] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.090808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.090814] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.090819] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.090822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.090825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2271ec0) 00:22:52.768 [2024-07-12 19:14:55.090831] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.768 [2024-07-12 19:14:55.090840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5440, cid 4, qid 0 00:22:52.768 [2024-07-12 19:14:55.090906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.768 [2024-07-12 19:14:55.090912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.768 [2024-07-12 19:14:55.090915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.090920] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5440) on tqpair=0x2271ec0 00:22:52.768 [2024-07-12 19:14:55.090970] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.090979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.090985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.090989] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2271ec0) 00:22:52.768 [2024-07-12 19:14:55.090994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.768 [2024-07-12 19:14:55.091003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5440, cid 4, qid 0 00:22:52.768 [2024-07-12 19:14:55.091078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.768 [2024-07-12 19:14:55.091084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.768 [2024-07-12 19:14:55.091087] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.091090] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2271ec0): datao=0, datal=4096, cccid=4 00:22:52.768 [2024-07-12 19:14:55.091094] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f5440) on tqpair(0x2271ec0): expected_datao=0, payload_size=4096 00:22:52.768 [2024-07-12 19:14:55.091097] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.091103] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.091106] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.091116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.768 [2024-07-12 19:14:55.091121] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.768 [2024-07-12 19:14:55.091124] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.091127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5440) on tqpair=0x2271ec0 00:22:52.768 [2024-07-12 19:14:55.091135] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:52.768 [2024-07-12 19:14:55.091147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.091155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.091161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.091164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2271ec0) 00:22:52.768 [2024-07-12 19:14:55.091170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.768 [2024-07-12 19:14:55.091180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5440, cid 4, qid 0 00:22:52.768 [2024-07-12 19:14:55.091266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.768 [2024-07-12 19:14:55.091273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.768 [2024-07-12 19:14:55.091276] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.091279] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2271ec0): datao=0, datal=4096, cccid=4 00:22:52.768 [2024-07-12 19:14:55.091282] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f5440) on tqpair(0x2271ec0): expected_datao=0, payload_size=4096 00:22:52.768 [2024-07-12 19:14:55.091286] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.091296] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.091300] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.132358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.768 [2024-07-12 19:14:55.132369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.768 [2024-07-12 19:14:55.132372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.132375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5440) on tqpair=0x2271ec0 00:22:52.768 [2024-07-12 19:14:55.132388] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.132397] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.132404] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.132408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2271ec0) 00:22:52.768 [2024-07-12 19:14:55.132414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.768 [2024-07-12 19:14:55.132426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5440, cid 4, qid 0 00:22:52.768 [2024-07-12 19:14:55.132496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.768 [2024-07-12 19:14:55.132502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.768 [2024-07-12 19:14:55.132505] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.132508] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2271ec0): datao=0, datal=4096, cccid=4 00:22:52.768 [2024-07-12 19:14:55.132511] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f5440) on tqpair(0x2271ec0): expected_datao=0, payload_size=4096 00:22:52.768 [2024-07-12 19:14:55.132515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.132528] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.132532] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.177235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.768 [2024-07-12 19:14:55.177245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.768 [2024-07-12 19:14:55.177248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.177251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5440) on tqpair=0x2271ec0 00:22:52.768 [2024-07-12 19:14:55.177259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.177267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.177277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.177283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.177287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.177292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.177296] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:52.768 [2024-07-12 19:14:55.177300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:52.768 [2024-07-12 19:14:55.177305] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:52.768 [2024-07-12 19:14:55.177317] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.177323] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2271ec0) 00:22:52.768 [2024-07-12 19:14:55.177330] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.768 [2024-07-12 19:14:55.177335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.177339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.177342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2271ec0) 00:22:52.768 [2024-07-12 19:14:55.177347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.768 [2024-07-12 19:14:55.177360] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5440, cid 4, qid 0 00:22:52.768 [2024-07-12 19:14:55.177365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f55c0, cid 5, qid 0 00:22:52.768 [2024-07-12 19:14:55.177448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.768 [2024-07-12 19:14:55.177454] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.768 [2024-07-12 19:14:55.177457] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.768 [2024-07-12 19:14:55.177461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5440) on tqpair=0x2271ec0 00:22:52.768 [2024-07-12 19:14:55.177466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.768 [2024-07-12 19:14:55.177471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.768 [2024-07-12 19:14:55.177474] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.177477] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f55c0) on tqpair=0x2271ec0 00:22:52.769 [2024-07-12 19:14:55.177485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.177488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2271ec0) 00:22:52.769 [2024-07-12 19:14:55.177494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.769 [2024-07-12 19:14:55.177503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f55c0, cid 5, qid 0 00:22:52.769 [2024-07-12 19:14:55.177569] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.769 [2024-07-12 19:14:55.177575] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.769 [2024-07-12 19:14:55.177578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.177581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f55c0) on tqpair=0x2271ec0 00:22:52.769 [2024-07-12 19:14:55.177588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.177592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2271ec0) 00:22:52.769 [2024-07-12 19:14:55.177597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.769 [2024-07-12 19:14:55.177606] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f55c0, cid 5, qid 0 00:22:52.769 [2024-07-12 19:14:55.177668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.769 [2024-07-12 19:14:55.177673] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.769 [2024-07-12 19:14:55.177676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.177679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f55c0) on tqpair=0x2271ec0 00:22:52.769 [2024-07-12 19:14:55.177687] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.177690] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2271ec0) 00:22:52.769 [2024-07-12 19:14:55.177696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.769 [2024-07-12 19:14:55.177719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f55c0, cid 5, qid 0 00:22:52.769 [2024-07-12 19:14:55.177787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.769 [2024-07-12 19:14:55.177793] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.769 [2024-07-12 19:14:55.177796] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.177799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f55c0) on tqpair=0x2271ec0 00:22:52.769 [2024-07-12 19:14:55.177811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.177815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2271ec0) 00:22:52.769 [2024-07-12 19:14:55.177821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.769 [2024-07-12 19:14:55.177827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.177831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2271ec0) 00:22:52.769 [2024-07-12 19:14:55.177836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.769 [2024-07-12 19:14:55.177842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.177846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2271ec0) 00:22:52.769 [2024-07-12 19:14:55.177851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.769 [2024-07-12 19:14:55.177857] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.177860] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2271ec0) 00:22:52.769 [2024-07-12 19:14:55.177865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.769 [2024-07-12 19:14:55.177875] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f55c0, cid 5, qid 0 00:22:52.769 [2024-07-12 19:14:55.177880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5440, cid 4, qid 0 00:22:52.769 [2024-07-12 19:14:55.177884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5740, cid 6, qid 0 00:22:52.769 [2024-07-12 19:14:55.177888] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f58c0, cid 7, qid 0 00:22:52.769 [2024-07-12 19:14:55.178025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.769 [2024-07-12 19:14:55.178031] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.769 [2024-07-12 19:14:55.178034] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178038] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2271ec0): datao=0, datal=8192, cccid=5 00:22:52.769 [2024-07-12 19:14:55.178041] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f55c0) on tqpair(0x2271ec0): expected_datao=0, payload_size=8192 00:22:52.769 [2024-07-12 19:14:55.178045] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178070] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178073] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.769 [2024-07-12 19:14:55.178083] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.769 [2024-07-12 19:14:55.178085] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178089] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2271ec0): datao=0, datal=512, cccid=4 00:22:52.769 [2024-07-12 19:14:55.178092] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x22f5440) on tqpair(0x2271ec0): expected_datao=0, payload_size=512 00:22:52.769 [2024-07-12 19:14:55.178098] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178103] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178106] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178111] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.769 [2024-07-12 19:14:55.178115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.769 [2024-07-12 19:14:55.178118] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178121] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2271ec0): datao=0, datal=512, cccid=6 00:22:52.769 [2024-07-12 19:14:55.178125] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f5740) on tqpair(0x2271ec0): expected_datao=0, payload_size=512 00:22:52.769 [2024-07-12 19:14:55.178128] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178134] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178137] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.769 [2024-07-12 19:14:55.178146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.769 [2024-07-12 19:14:55.178149] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178152] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2271ec0): datao=0, datal=4096, cccid=7 00:22:52.769 [2024-07-12 19:14:55.178155] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f58c0) on tqpair(0x2271ec0): expected_datao=0, payload_size=4096 00:22:52.769 [2024-07-12 19:14:55.178159] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178165] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178167] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178175] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.769 [2024-07-12 19:14:55.178179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.769 [2024-07-12 19:14:55.178182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f55c0) on tqpair=0x2271ec0 00:22:52.769 [2024-07-12 19:14:55.178195] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.769 [2024-07-12 19:14:55.178200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.769 [2024-07-12 19:14:55.178203] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.769 [2024-07-12 19:14:55.178206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5440) on tqpair=0x2271ec0 00:22:52.769 [2024-07-12 19:14:55.178215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.769 [2024-07-12 19:14:55.178220] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.769 [2024-07-12 19:14:55.178223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter
00:22:52.769 [2024-07-12 19:14:55.178233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5740) on tqpair=0x2271ec0
00:22:52.769 [2024-07-12 19:14:55.178239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:52.769 [2024-07-12 19:14:55.178244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:52.769 [2024-07-12 19:14:55.178247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:52.769 [2024-07-12 19:14:55.178250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f58c0) on tqpair=0x2271ec0
00:22:52.769 =====================================================
00:22:52.769 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:52.769 =====================================================
00:22:52.769 Controller Capabilities/Features
00:22:52.769 ================================
00:22:52.769 Vendor ID: 8086
00:22:52.769 Subsystem Vendor ID: 8086
00:22:52.769 Serial Number: SPDK00000000000001
00:22:52.769 Model Number: SPDK bdev Controller
00:22:52.769 Firmware Version: 24.09
00:22:52.769 Recommended Arb Burst: 6
00:22:52.769 IEEE OUI Identifier: e4 d2 5c
00:22:52.769 Multi-path I/O
00:22:52.769 May have multiple subsystem ports: Yes
00:22:52.769 May have multiple controllers: Yes
00:22:52.769 Associated with SR-IOV VF: No
00:22:52.769 Max Data Transfer Size: 131072
00:22:52.769 Max Number of Namespaces: 32
00:22:52.769 Max Number of I/O Queues: 127
00:22:52.769 NVMe Specification Version (VS): 1.3
00:22:52.769 NVMe Specification Version (Identify): 1.3
00:22:52.769 Maximum Queue Entries: 128
00:22:52.769 Contiguous Queues Required: Yes
00:22:52.769 Arbitration Mechanisms Supported
00:22:52.769 Weighted Round Robin: Not Supported
00:22:52.769 Vendor Specific: Not Supported
00:22:52.769 Reset Timeout: 15000 ms
00:22:52.769 Doorbell Stride: 4 bytes
00:22:52.769 NVM Subsystem Reset: Not Supported
00:22:52.769 Command Sets Supported
00:22:52.769 NVM Command Set: Supported
00:22:52.769 Boot Partition: Not Supported
00:22:52.769 Memory Page Size Minimum: 4096 bytes
00:22:52.770 Memory Page Size Maximum: 4096 bytes
00:22:52.770 Persistent Memory Region: Not Supported
00:22:52.770 Optional Asynchronous Events Supported
00:22:52.770 Namespace Attribute Notices: Supported
00:22:52.770 Firmware Activation Notices: Not Supported
00:22:52.770 ANA Change Notices: Not Supported
00:22:52.770 PLE Aggregate Log Change Notices: Not Supported
00:22:52.770 LBA Status Info Alert Notices: Not Supported
00:22:52.770 EGE Aggregate Log Change Notices: Not Supported
00:22:52.770 Normal NVM Subsystem Shutdown event: Not Supported
00:22:52.770 Zone Descriptor Change Notices: Not Supported
00:22:52.770 Discovery Log Change Notices: Not Supported
00:22:52.770 Controller Attributes
00:22:52.770 128-bit Host Identifier: Supported
00:22:52.770 Non-Operational Permissive Mode: Not Supported
00:22:52.770 NVM Sets: Not Supported
00:22:52.770 Read Recovery Levels: Not Supported
00:22:52.770 Endurance Groups: Not Supported
00:22:52.770 Predictable Latency Mode: Not Supported
00:22:52.770 Traffic Based Keep Alive: Not Supported
00:22:52.770 Namespace Granularity: Not Supported
00:22:52.770 SQ Associations: Not Supported
00:22:52.770 UUID List: Not Supported
00:22:52.770 Multi-Domain Subsystem: Not Supported
00:22:52.770 Fixed Capacity Management: Not Supported
00:22:52.770 Variable Capacity Management: Not Supported
00:22:52.770 Delete Endurance Group: Not Supported
00:22:52.770 Delete NVM Set: Not Supported
00:22:52.770 Extended LBA Formats Supported: Not Supported
00:22:52.770 Flexible Data Placement Supported: Not Supported
00:22:52.770 
00:22:52.770 Controller Memory Buffer Support
00:22:52.770 ================================
00:22:52.770 Supported: No
00:22:52.770 
00:22:52.770 Persistent Memory Region Support
00:22:52.770 ================================
00:22:52.770 Supported: No
00:22:52.770 
00:22:52.770 Admin Command Set Attributes
00:22:52.770 ============================
00:22:52.770 Security Send/Receive: Not Supported
00:22:52.770 Format NVM: Not Supported
00:22:52.770 Firmware Activate/Download: Not Supported
00:22:52.770 Namespace Management: Not Supported
00:22:52.770 Device Self-Test: Not Supported
00:22:52.770 Directives: Not Supported
00:22:52.770 NVMe-MI: Not Supported
00:22:52.770 Virtualization Management: Not Supported
00:22:52.770 Doorbell Buffer Config: Not Supported
00:22:52.770 Get LBA Status Capability: Not Supported
00:22:52.770 Command & Feature Lockdown Capability: Not Supported
00:22:52.770 Abort Command Limit: 4
00:22:52.770 Async Event Request Limit: 4
00:22:52.770 Number of Firmware Slots: N/A
00:22:52.770 Firmware Slot 1 Read-Only: N/A
00:22:52.770 Firmware Activation Without Reset: N/A
00:22:52.770 Multiple Update Detection Support: N/A
00:22:52.770 Firmware Update Granularity: No Information Provided
00:22:52.770 Per-Namespace SMART Log: No
00:22:52.770 Asymmetric Namespace Access Log Page: Not Supported
00:22:52.770 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:22:52.770 Command Effects Log Page: Supported
00:22:52.770 Get Log Page Extended Data: Supported
00:22:52.770 Telemetry Log Pages: Not Supported
00:22:52.770 Persistent Event Log Pages: Not Supported
00:22:52.770 Supported Log Pages Log Page: May Support
00:22:52.770 Commands Supported & Effects Log Page: Not Supported
00:22:52.770 Feature Identifiers & Effects Log Page: May Support
00:22:52.770 NVMe-MI Commands & Effects Log Page: May Support
00:22:52.770 Data Area 4 for Telemetry Log: Not Supported
00:22:52.770 Error Log Page Entries Supported: 128
00:22:52.770 Keep Alive: Supported
00:22:52.770 Keep Alive Granularity: 10000 ms
00:22:52.770 
00:22:52.770 NVM Command Set Attributes
00:22:52.770 ==========================
00:22:52.770 Submission Queue Entry Size
00:22:52.770 Max: 64
00:22:52.770 Min: 64
00:22:52.770 Completion Queue Entry Size
00:22:52.770 Max: 16
00:22:52.770 Min: 16
00:22:52.770 Number of Namespaces: 32
00:22:52.770 Compare Command: Supported
00:22:52.770 Write Uncorrectable Command: Not Supported
00:22:52.770 Dataset Management Command: Supported
00:22:52.770 Write Zeroes Command: Supported
00:22:52.770 Set Features Save Field: Not Supported
00:22:52.770 Reservations: Supported
00:22:52.770 Timestamp: Not Supported
00:22:52.770 Copy: Supported
00:22:52.770 Volatile Write Cache: Present
00:22:52.770 Atomic Write Unit (Normal): 1
00:22:52.770 Atomic Write Unit (PFail): 1
00:22:52.770 Atomic Compare & Write Unit: 1
00:22:52.770 Fused Compare & Write: Supported
00:22:52.770 Scatter-Gather List
00:22:52.770 SGL Command Set: Supported
00:22:52.770 SGL Keyed: Supported
00:22:52.770 SGL Bit Bucket Descriptor: Not Supported
00:22:52.770 SGL Metadata Pointer: Not Supported
00:22:52.770 Oversized SGL: Not Supported
00:22:52.770 SGL Metadata Address: Not Supported
00:22:52.770 SGL Offset: Supported
00:22:52.770 Transport SGL Data Block: Not Supported
00:22:52.770 Replay Protected Memory Block: Not Supported
00:22:52.770 
00:22:52.770 Firmware Slot Information
00:22:52.770 =========================
00:22:52.770 Active slot: 1
00:22:52.770 Slot 1 Firmware Revision: 24.09
00:22:52.770 
00:22:52.770 
00:22:52.770 Commands Supported and Effects
00:22:52.770 ==============================
00:22:52.770 Admin Commands
00:22:52.770 --------------
00:22:52.770 Get Log Page (02h): Supported
00:22:52.770 Identify (06h): Supported
00:22:52.770 Abort (08h): Supported
00:22:52.770 Set Features (09h): Supported
00:22:52.770 Get Features (0Ah): Supported
00:22:52.770 Asynchronous Event Request (0Ch): Supported
00:22:52.770 Keep Alive (18h): Supported
00:22:52.770 I/O Commands
00:22:52.770 ------------
00:22:52.770 Flush (00h): Supported LBA-Change
00:22:52.770 Write (01h): Supported LBA-Change
00:22:52.770 Read (02h): Supported
00:22:52.770 Compare (05h): Supported
00:22:52.770 Write Zeroes (08h): Supported LBA-Change
00:22:52.770 Dataset Management (09h): Supported LBA-Change
00:22:52.770 Copy (19h): Supported LBA-Change
00:22:52.770 
00:22:52.770 Error Log
00:22:52.770 =========
00:22:52.770 
00:22:52.770 Arbitration
00:22:52.770 ===========
00:22:52.770 Arbitration Burst: 1
00:22:52.770 
00:22:52.770 Power Management
00:22:52.770 ================
00:22:52.770 Number of Power States: 1
00:22:52.770 Current Power State: Power State #0
00:22:52.770 Power State #0:
00:22:52.770 Max Power: 0.00 W
00:22:52.770 Non-Operational State: Operational
00:22:52.770 Entry Latency: Not Reported
00:22:52.770 Exit Latency: Not Reported
00:22:52.770 Relative Read Throughput: 0
00:22:52.770 Relative Read Latency: 0
00:22:52.770 Relative Write Throughput: 0
00:22:52.770 Relative Write Latency: 0
00:22:52.770 Idle Power: Not Reported
00:22:52.770 Active Power: Not Reported
00:22:52.770 Non-Operational Permissive Mode: Not Supported
00:22:52.770 
00:22:52.770 Health Information
00:22:52.770 ==================
00:22:52.770 Critical Warnings:
00:22:52.770 Available Spare Space: OK
00:22:52.770 Temperature: OK
00:22:52.770 Device Reliability: OK
00:22:52.770 Read Only: No
00:22:52.770 Volatile Memory Backup: OK
00:22:52.770 Current Temperature: 0 Kelvin (-273 Celsius)
00:22:52.770 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:22:52.770 Available Spare: 0%
00:22:52.770 Available Spare Threshold: 0%
00:22:52.770 Life Percentage Used:[2024-07-12 19:14:55.178332] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:52.770 [2024-07-12 19:14:55.178337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2271ec0)
00:22:52.770 [2024-07-12 19:14:55.178343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:52.770 [2024-07-12 19:14:55.178355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f58c0, cid 7, qid 0
00:22:52.770 [2024-07-12 19:14:55.178438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:52.770 [2024-07-12 19:14:55.178444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:52.770 [2024-07-12 19:14:55.178446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:52.770 [2024-07-12 19:14:55.178450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f58c0) on tqpair=0x2271ec0
00:22:52.770 [2024-07-12 19:14:55.178479] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:22:52.770 [2024-07-12 19:14:55.178488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4e40) on
tqpair=0x2271ec0 00:22:52.770 [2024-07-12 19:14:55.178494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.770 [2024-07-12 19:14:55.178499] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4fc0) on tqpair=0x2271ec0 00:22:52.770 [2024-07-12 19:14:55.178503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.770 [2024-07-12 19:14:55.178507] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5140) on tqpair=0x2271ec0 00:22:52.770 [2024-07-12 19:14:55.178511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.770 [2024-07-12 19:14:55.178515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.770 [2024-07-12 19:14:55.178519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.771 [2024-07-12 19:14:55.178525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.178529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.178532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.771 [2024-07-12 19:14:55.178538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.771 [2024-07-12 19:14:55.178549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.771 [2024-07-12 19:14:55.178611] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.771 [2024-07-12 19:14:55.178616] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.771 [2024-07-12 19:14:55.178619] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.178622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.771 [2024-07-12 19:14:55.178628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.178632] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.178635] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.771 [2024-07-12 19:14:55.178640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.771 [2024-07-12 19:14:55.178652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.771 [2024-07-12 19:14:55.178728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.771 [2024-07-12 19:14:55.178734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.771 [2024-07-12 19:14:55.178737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.178740] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.771 [2024-07-12 19:14:55.178743] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:52.771 [2024-07-12 19:14:55.178747] 
nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:52.771 [2024-07-12 19:14:55.178757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.178760] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.178764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.771 [2024-07-12 19:14:55.178769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.771 [2024-07-12 19:14:55.178778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.771 [2024-07-12 19:14:55.178846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.771 [2024-07-12 19:14:55.178852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.771 [2024-07-12 19:14:55.178854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.178857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.771 [2024-07-12 19:14:55.178866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.178869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.178872] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.771 [2024-07-12 19:14:55.178878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.771 [2024-07-12 19:14:55.178887] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.771 [2024-07-12 19:14:55.178963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.771 [2024-07-12 19:14:55.178969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.771 [2024-07-12 19:14:55.178972] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.178975] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.771 [2024-07-12 19:14:55.178983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.178986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.178989] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.771 [2024-07-12 19:14:55.178995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.771 [2024-07-12 19:14:55.179004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.771 [2024-07-12 19:14:55.179061] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.771 [2024-07-12 19:14:55.179067] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.771 [2024-07-12 19:14:55.179070] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.179073] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.771 [2024-07-12 19:14:55.179081] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.179084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.179087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.771 [2024-07-12 19:14:55.179093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.771 [2024-07-12 19:14:55.179102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.771 [2024-07-12 19:14:55.179160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.771 [2024-07-12 19:14:55.179165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.771 [2024-07-12 19:14:55.179168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.179171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.771 [2024-07-12 19:14:55.179179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.179184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.179187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.771 [2024-07-12 19:14:55.179193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.771 [2024-07-12 19:14:55.179202] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.771 [2024-07-12 19:14:55.179280] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.771 [2024-07-12 19:14:55.179286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.771 [2024-07-12 19:14:55.179289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.179292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.771 [2024-07-12 19:14:55.179300] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.179303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.179306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.771 [2024-07-12 19:14:55.179312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.771 [2024-07-12 19:14:55.179321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.771 [2024-07-12 19:14:55.179395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.771 [2024-07-12 19:14:55.179400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.771 [2024-07-12 19:14:55.179403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.179406] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.771 [2024-07-12 19:14:55.179414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.179417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.179420] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.771 [2024-07-12 19:14:55.179426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.771 [2024-07-12 19:14:55.179435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.771 [2024-07-12 19:14:55.179512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.771 [2024-07-12 19:14:55.179519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.771 [2024-07-12 19:14:55.179522] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.179525] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.771 [2024-07-12 19:14:55.179533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.179537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.771 [2024-07-12 19:14:55.179540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.772 [2024-07-12 19:14:55.179545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.772 [2024-07-12 19:14:55.179554] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.772 [2024-07-12 19:14:55.179617] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.772 [2024-07-12 19:14:55.179623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.772 [2024-07-12 19:14:55.179626] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.179629] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.772 [2024-07-12 19:14:55.179638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.179642] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.179645] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.772 [2024-07-12 19:14:55.179652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.772 [2024-07-12 19:14:55.179661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.772 [2024-07-12 19:14:55.179719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.772 [2024-07-12 19:14:55.179724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.772 [2024-07-12 19:14:55.179727] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.179731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.772 [2024-07-12 19:14:55.179738] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.179742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.179745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.772 [2024-07-12 19:14:55.179750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.772 [2024-07-12 19:14:55.179759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.772 [2024-07-12 19:14:55.179820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.772 [2024-07-12 19:14:55.179825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.772 [2024-07-12 19:14:55.179828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.179831] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.772 [2024-07-12 19:14:55.179839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.179843] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.179846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.772 [2024-07-12 19:14:55.179851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.772 [2024-07-12 19:14:55.179860] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.772 [2024-07-12 19:14:55.179920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.772 [2024-07-12 19:14:55.179926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.772 [2024-07-12 19:14:55.179929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.179932] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.772 [2024-07-12 19:14:55.179940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.179943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.179946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.772 [2024-07-12 19:14:55.179952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.772 [2024-07-12 19:14:55.179961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.772 [2024-07-12 19:14:55.180021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.772 [2024-07-12 19:14:55.180027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.772 [2024-07-12 19:14:55.180029] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.772 [2024-07-12 19:14:55.180040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.772 [2024-07-12 19:14:55.180054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.772 [2024-07-12 19:14:55.180063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.772 [2024-07-12 
19:14:55.180138] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.772 [2024-07-12 19:14:55.180143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.772 [2024-07-12 19:14:55.180146] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.772 [2024-07-12 19:14:55.180157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.772 [2024-07-12 19:14:55.180169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.772 [2024-07-12 19:14:55.180178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.772 [2024-07-12 19:14:55.180254] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.772 [2024-07-12 19:14:55.180260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.772 [2024-07-12 19:14:55.180262] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180266] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.772 [2024-07-12 19:14:55.180274] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180280] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.772 [2024-07-12 19:14:55.180286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.772 [2024-07-12 19:14:55.180295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.772 [2024-07-12 19:14:55.180370] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.772 [2024-07-12 19:14:55.180376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.772 [2024-07-12 19:14:55.180379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.772 [2024-07-12 19:14:55.180390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180393] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180396] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.772 [2024-07-12 19:14:55.180402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.772 [2024-07-12 19:14:55.180411] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.772 [2024-07-12 19:14:55.180482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.772 [2024-07-12 19:14:55.180487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.772 
[2024-07-12 19:14:55.180490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.772 [2024-07-12 19:14:55.180502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.772 [2024-07-12 19:14:55.180514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.772 [2024-07-12 19:14:55.180525] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.772 [2024-07-12 19:14:55.180586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.772 [2024-07-12 19:14:55.180592] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.772 [2024-07-12 19:14:55.180595] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180598] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.772 [2024-07-12 19:14:55.180606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180610] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.772 [2024-07-12 19:14:55.180618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.772 [2024-07-12 19:14:55.180627] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.772 [2024-07-12 19:14:55.180703] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.772 [2024-07-12 19:14:55.180709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.772 [2024-07-12 19:14:55.180711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180714] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.772 [2024-07-12 19:14:55.180722] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.772 [2024-07-12 19:14:55.180734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.772 [2024-07-12 19:14:55.180744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.772 [2024-07-12 19:14:55.180804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.772 [2024-07-12 19:14:55.180809] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.772 [2024-07-12 19:14:55.180812] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180815] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.772 [2024-07-12 19:14:55.180823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.772 [2024-07-12 19:14:55.180829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.772 [2024-07-12 19:14:55.180835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.772 [2024-07-12 19:14:55.180844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.772 [2024-07-12 19:14:55.180902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.772 [2024-07-12 19:14:55.180907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.773 [2024-07-12 19:14:55.180910] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.773 [2024-07-12 19:14:55.180913] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.773 [2024-07-12 19:14:55.180921] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.773 [2024-07-12 19:14:55.180924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.773 [2024-07-12 19:14:55.180927] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.773 [2024-07-12 19:14:55.180933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.773 [2024-07-12 19:14:55.180942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.773 [2024-07-12 19:14:55.181002] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.773 [2024-07-12 19:14:55.181008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.773 [2024-07-12 19:14:55.181010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.773 [2024-07-12 19:14:55.181014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.773 [2024-07-12 19:14:55.181022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.773 [2024-07-12 19:14:55.181025] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.773 [2024-07-12 19:14:55.181028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.773 [2024-07-12 19:14:55.181034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.773 [2024-07-12 19:14:55.181043] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.773 [2024-07-12 19:14:55.181103] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.773 [2024-07-12 19:14:55.181109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.773 [2024-07-12 19:14:55.181112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.773 [2024-07-12 19:14:55.181115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.773 [2024-07-12 19:14:55.181123] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.773 [2024-07-12 19:14:55.181127] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.773 [2024-07-12 19:14:55.181130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.773 [2024-07-12 19:14:55.181135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.773 [2024-07-12 19:14:55.181145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.773 [2024-07-12 19:14:55.181220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.773 [2024-07-12 19:14:55.185233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.773 [2024-07-12 19:14:55.185238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.773 [2024-07-12 19:14:55.185242] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.773 [2024-07-12 19:14:55.185251] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.773 [2024-07-12 19:14:55.185255] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.773 [2024-07-12 19:14:55.185258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2271ec0) 00:22:52.773 [2024-07-12 19:14:55.185264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.773 [2024-07-12 19:14:55.185275] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f52c0, cid 3, qid 0 00:22:52.773 [2024-07-12 19:14:55.185420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.773 [2024-07-12 19:14:55.185426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.773 [2024-07-12 19:14:55.185428] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.773 [2024-07-12 19:14:55.185432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f52c0) on tqpair=0x2271ec0 00:22:52.773 [2024-07-12 19:14:55.185438] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:22:52.773 0% 00:22:52.773 Data Units Read: 0 00:22:52.773 Data Units Written: 0 00:22:52.773 Host Read Commands: 0 00:22:52.773 Host Write Commands: 0 00:22:52.773 Controller Busy Time: 0 minutes 00:22:52.773 Power Cycles: 0 00:22:52.773 Power On Hours: 0 hours 00:22:52.773 Unsafe Shutdowns: 0 00:22:52.773 Unrecoverable Media Errors: 0 00:22:52.773 Lifetime Error Log Entries: 0 00:22:52.773 Warning Temperature Time: 0 minutes 00:22:52.773 Critical Temperature Time: 0 minutes 00:22:52.773 00:22:52.773 Number of Queues 00:22:52.773 ================ 00:22:52.773 Number of I/O Submission Queues: 127 00:22:52.773 Number of I/O Completion Queues: 127 00:22:52.773 00:22:52.773 Active Namespaces 00:22:52.773 ================= 00:22:52.773 Namespace ID:1 00:22:52.773 Error Recovery Timeout: Unlimited 00:22:52.773 Command Set Identifier: NVM (00h) 00:22:52.773 Deallocate: Supported 00:22:52.773 Deallocated/Unwritten Error: Not Supported 00:22:52.773 Deallocated Read Value: Unknown 00:22:52.773 Deallocate in Write Zeroes: Not Supported 00:22:52.773 Deallocated Guard Field: 0xFFFF 00:22:52.773 Flush: Supported 00:22:52.773 Reservation: Supported 00:22:52.773 Namespace Sharing Capabilities: Multiple Controllers 00:22:52.773 Size (in LBAs): 131072 (0GiB) 00:22:52.773 Capacity (in LBAs): 131072 (0GiB) 
00:22:52.773 Utilization (in LBAs): 131072 (0GiB)
00:22:52.773 NGUID: ABCDEF0123456789ABCDEF0123456789
00:22:52.773 EUI64: ABCDEF0123456789
00:22:52.773 UUID: 6f09fdaf-a254-4ef3-ac2c-c27322f47022
00:22:52.773 Thin Provisioning: Not Supported
00:22:52.773 Per-NS Atomic Units: Yes
00:22:52.773 Atomic Boundary Size (Normal): 0
00:22:52.773 Atomic Boundary Size (PFail): 0
00:22:52.773 Atomic Boundary Offset: 0
00:22:52.773 Maximum Single Source Range Length: 65535
00:22:52.773 Maximum Copy Length: 65535
00:22:52.773 Maximum Source Range Count: 1
00:22:52.773 NGUID/EUI64 Never Reused: No
00:22:52.773 Namespace Write Protected: No
00:22:52.773 Number of LBA Formats: 1
00:22:52.773 Current LBA Format: LBA Format #00
00:22:52.773 LBA Format #00: Data Size: 512 Metadata Size: 0
00:22:52.773 
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:52.773 rmmod nvme_tcp
00:22:52.773 rmmod nvme_fabrics
00:22:52.773 rmmod nvme_keyring
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 385522 ']'
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 385522
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 385522 ']'
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 385522
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 385522
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 385522'
00:22:52.773 killing process with pid 385522
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 385522
00:22:52.773 19:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 385522
00:22:53.032 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:53.032 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:22:53.032 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:22:53.032 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:53.032 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:53.032 19:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:53.032 19:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:53.032 19:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:55.570 19:14:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:22:55.570 
00:22:55.570 real 0m9.533s
00:22:55.570 user 0m7.679s
00:22:55.570 sys 0m4.603s
00:22:55.570 19:14:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:22:55.570 19:14:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:55.570 ************************************
00:22:55.570 END TEST nvmf_identify
00:22:55.570 ************************************
00:22:55.570 19:14:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:22:55.570 19:14:57 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:22:55.570 19:14:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:22:55.570 19:14:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:22:55.570 19:14:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:22:55.570 ************************************
00:22:55.570 START TEST nvmf_perf
00:22:55.570 ************************************
00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:22:55.570 * Looking for test storage...
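The identify dump that nvmf_identify printed above comes from SPDK's userspace initiator (the nvme_tcp.c/nvme_ctrlr.c *DEBUG* lines are its trace output), and the test has since deleted cnode1 via rpc_cmd. For a target configured like that one, a rough kernel-initiator equivalent with stock nvme-cli might look like the sketch below; the address, port, and subsystem NQN are the ones reported above, while the /dev/nvme0 device node is only illustrative and depends on what is already attached to the host.

  # Hedged manual sketch, not part of the harness: NVMe/TCP identify via nvme-cli.
  sudo modprobe nvme-tcp                                  # kernel NVMe/TCP initiator
  sudo nvme discover -t tcp -a 10.0.0.2 -s 4420           # discovery log should list nqn.2016-06.io.spdk:cnode1
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  sudo nvme id-ctrl /dev/nvme0                            # controller data, cf. the capabilities dump above
  sudo nvme id-ns /dev/nvme0 -n 1                         # namespace data: NGUID, EUI64, LBA formats
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1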
00:22:55.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.570 19:14:57 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:55.570 19:14:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:00.847 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:00.847 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:00.847 Found net devices under 0000:86:00.0: cvl_0_0 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:00.847 Found net devices under 0000:86:00.1: cvl_0_1 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.847 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:23:01.107 00:23:01.107 --- 10.0.0.2 ping statistics --- 00:23:01.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.107 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:01.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:23:01.107 00:23:01.107 --- 10.0.0.1 ping statistics --- 00:23:01.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.107 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=389222 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 389222 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 389222 ']' 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.107 19:15:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:01.107 [2024-07-12 19:15:03.582383] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:23:01.107 [2024-07-12 19:15:03.582425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.107 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.107 [2024-07-12 19:15:03.650834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:01.366 [2024-07-12 19:15:03.724611] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.366 [2024-07-12 19:15:03.724655] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:01.366 [2024-07-12 19:15:03.724661] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.366 [2024-07-12 19:15:03.724667] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.366 [2024-07-12 19:15:03.724671] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.366 [2024-07-12 19:15:03.724741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.366 [2024-07-12 19:15:03.724782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.366 [2024-07-12 19:15:03.724885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.366 [2024-07-12 19:15:03.724886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.933 19:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.933 19:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:23:01.933 19:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.933 19:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.933 19:15:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:01.934 19:15:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.934 19:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:01.934 19:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:05.227 19:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:05.227 19:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:05.227 19:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:05.227 19:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:05.485 19:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:05.485 19:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:05.485 19:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:05.485 19:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:05.485 19:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:05.485 [2024-07-12 19:15:08.004679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.485 19:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:05.744 19:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:05.744 19:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:06.003 19:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:06.003 19:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:23:06.261 19:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:06.261 [2024-07-12 19:15:08.759458] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:06.261 19:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:23:06.520 19:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:23:06.520 19:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:23:06.520 19:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:23:06.520 19:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:23:07.896 Initializing NVMe Controllers
00:23:07.896 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:23:07.896 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:23:07.896 Initialization complete. Launching workers.
00:23:07.896 ========================================================
00:23:07.896 Latency(us)
00:23:07.896 Device Information : IOPS MiB/s Average min max
00:23:07.896 PCIE (0000:5e:00.0) NSID 1 from core 0: 97240.22 379.84 328.48 26.42 6191.13
00:23:07.896 ========================================================
00:23:07.896 Total : 97240.22 379.84 328.48 26.42 6191.13
00:23:07.896
00:23:07.896 19:15:10 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:07.896 EAL: No free 2048 kB hugepages reported on node 1
00:23:09.273 Initializing NVMe Controllers
00:23:09.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:09.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:09.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:09.273 Initialization complete. Launching workers.
00:23:09.273 ========================================================
00:23:09.273 Latency(us)
00:23:09.273 Device Information : IOPS MiB/s Average min max
00:23:09.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.00 0.31 12960.68 105.80 45690.66
00:23:09.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19699.89 7299.14 49053.95
00:23:09.273 ========================================================
00:23:09.273 Total : 130.00 0.51 15604.52 105.80 49053.95
00:23:09.273
00:23:09.273 19:15:11 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:09.273 EAL: No free 2048 kB hugepages reported on node 1
00:23:10.211 Initializing NVMe Controllers
00:23:10.211 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:10.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:10.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:10.211 Initialization complete. Launching workers.
00:23:10.211 ========================================================
00:23:10.211 Latency(us)
00:23:10.211 Device Information : IOPS MiB/s Average min max
00:23:10.211 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11153.93 43.57 2868.21 392.48 6848.96
00:23:10.211 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3866.42 15.10 8397.85 4321.06 47841.19
00:23:10.211 ========================================================
00:23:10.211 Total : 15020.34 58.67 4291.61 392.48 47841.19
00:23:10.211
00:23:10.211 19:15:12 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:23:10.211 19:15:12 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:23:10.211 19:15:12 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:10.471 EAL: No free 2048 kB hugepages reported on node 1
00:23:13.007 Initializing NVMe Controllers
00:23:13.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:13.007 Controller IO queue size 128, less than required.
00:23:13.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:13.007 Controller IO queue size 128, less than required.
00:23:13.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:13.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:13.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:13.007 Initialization complete. Launching workers.
00:23:13.007 ========================================================
00:23:13.007 Latency(us)
00:23:13.007 Device Information : IOPS MiB/s Average min max
00:23:13.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1943.94 485.99 66802.02 48075.02 111079.65
00:23:13.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 608.48 152.12 217999.13 60603.81 345912.78
00:23:13.007 ========================================================
00:23:13.007 Total : 2552.43 638.11 102846.46 48075.02 345912.78
00:23:13.007
00:23:13.007 19:15:15 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:23:13.007 EAL: No free 2048 kB hugepages reported on node 1
00:23:13.007 No valid NVMe controllers or AIO or URING devices found
00:23:13.007 Initializing NVMe Controllers
00:23:13.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:13.007 Controller IO queue size 128, less than required.
00:23:13.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:13.007 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:23:13.007 Controller IO queue size 128, less than required.
00:23:13.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:13.007 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:23:13.007 WARNING: Some requested NVMe devices were skipped
00:23:13.007 19:15:15 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:23:13.279 EAL: No free 2048 kB hugepages reported on node 1
00:23:15.543 Initializing NVMe Controllers
00:23:15.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:15.543 Controller IO queue size 128, less than required.
00:23:15.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:15.543 Controller IO queue size 128, less than required.
00:23:15.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:15.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:15.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:15.543 Initialization complete. Launching workers.
00:23:15.543
00:23:15.543 ====================
00:23:15.543 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:23:15.543 TCP transport:
00:23:15.543 polls: 17021
00:23:15.543 idle_polls: 12437
00:23:15.543 sock_completions: 4584
00:23:15.543 nvme_completions: 7127
00:23:15.543 submitted_requests: 10726
00:23:15.543 queued_requests: 1
00:23:15.543
00:23:15.543 ====================
00:23:15.543 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:23:15.543 TCP transport:
00:23:15.543 polls: 13646
00:23:15.543 idle_polls: 9207
00:23:15.543 sock_completions: 4439
00:23:15.543 nvme_completions: 7003
00:23:15.543 submitted_requests: 10494
00:23:15.543 queued_requests: 1
00:23:15.543 ========================================================
00:23:15.543 Latency(us)
00:23:15.543 Device Information : IOPS MiB/s Average min max
00:23:15.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1781.28 445.32 73438.18 43041.20 126836.09
00:23:15.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1750.29 437.57 74080.38 34702.49 119126.42
00:23:15.543 ========================================================
00:23:15.543 Total : 3531.57 882.89 73756.47 34702.49 126836.09
00:23:15.543
00:23:15.543 19:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:15.803 rmmod nvme_tcp
00:23:15.803 rmmod nvme_fabrics
00:23:15.803 rmmod nvme_keyring
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 389222 ']'
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 389222
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 389222 ']'
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 389222
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:15.803 19:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 389222
00:23:16.062 19:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:16.062 19:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:16.062 19:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 389222'
00:23:16.062 killing process with pid 389222
00:23:16.062 19:15:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 389222
00:23:17.443 19:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 389222
00:23:17.443 19:15:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:17.443 19:15:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:17.443 19:15:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:17.443 19:15:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:17.443 19:15:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:17.443 19:15:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:17.443 19:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:17.443 19:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:19.349 19:15:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:19.608
00:23:19.608 real 0m24.246s
00:23:19.608 user 1m4.374s
00:23:19.608 sys 0m7.720s
00:23:19.608 19:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:19.608 19:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:23:19.608 ************************************
00:23:19.608 END TEST nvmf_perf
00:23:19.608 ************************************
00:23:19.608 19:15:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:23:19.608 19:15:21 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:23:19.608 19:15:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:23:19.608 19:15:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:19.608 19:15:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:19.608 ************************************
00:23:19.608 START TEST nvmf_fio_host
00:23:19.608 ************************************
00:23:19.608 19:15:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:23:19.608 * Looking for test storage...
00:23:19.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:19.608 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:19.609 19:15:22 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:26.177 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:26.177 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:26.177 Found net devices under 0000:86:00.0: cvl_0_0 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:26.177 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:26.178 Found net devices under 0000:86:00.1: cvl_0_1 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
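[Annotation, not part of the captured output.] The trace above shows nvmf/common.sh re-detecting the two E810 ports (vendor 0x8086, device 0x159b) and settling on is_hw=yes; the nvmf_tcp_init step that follows rebuilds the same test topology the nvmf_perf run used, moving one port into a private network namespace to act as the NVMe/TCP target while the other stays in the root namespace as the initiator, so traffic crosses the physical link. A minimal sketch of that pattern, assembled only from commands visible in this log (interface and namespace names are the log's own; plain iproute2/iptables, nothing SPDK-specific):

  ip netns add cvl_0_0_ns_spdk                  # namespace that will own the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on 4420
  ping -c 1 10.0.0.2                            # same reachability check the script runs below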
00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:26.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:23:26.178 00:23:26.178 --- 10.0.0.2 ping statistics --- 00:23:26.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.178 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:26.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:23:26.178 00:23:26.178 --- 10.0.0.1 ping statistics --- 00:23:26.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.178 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=395736 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 395736 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 395736 ']' 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.178 19:15:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.178 [2024-07-12 19:15:27.879266] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:23:26.178 [2024-07-12 19:15:27.879309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.178 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.178 [2024-07-12 19:15:27.950279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:26.178 [2024-07-12 19:15:28.030449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:26.178 [2024-07-12 19:15:28.030486] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.178 [2024-07-12 19:15:28.030494] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.178 [2024-07-12 19:15:28.030500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.178 [2024-07-12 19:15:28.030504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.178 [2024-07-12 19:15:28.030568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.178 [2024-07-12 19:15:28.030611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.178 [2024-07-12 19:15:28.030636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.178 [2024-07-12 19:15:28.030637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:26.178 19:15:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.178 19:15:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:23:26.178 19:15:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:26.437 [2024-07-12 19:15:28.847587] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.437 19:15:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:26.437 19:15:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.437 19:15:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.437 19:15:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:26.696 Malloc1 00:23:26.696 19:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:26.955 19:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:26.955 19:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:27.214 [2024-07-12 19:15:29.649784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.214 19:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:23:27.474 19:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:23:27.733 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:23:27.733 fio-3.35
00:23:27.733 Starting 1 thread
00:23:27.733 EAL: No free 2048 kB hugepages reported on node 1
00:23:30.292
00:23:30.292 test: (groupid=0, jobs=1): err= 0: pid=396298: Fri Jul 12 19:15:32 2024
00:23:30.292 read: IOPS=11.8k, BW=46.0MiB/s (48.2MB/s)(92.2MiB/2005msec)
00:23:30.292 slat (nsec): min=1594, max=254846, avg=1771.53, stdev=2279.42
00:23:30.292 clat (usec): min=3110, max=10766, avg=5992.65, stdev=472.54
00:23:30.292 lat (usec): min=3144, max=10768, avg=5994.42, stdev=472.47
00:23:30.292 clat percentiles (usec):
00:23:30.292 | 1.00th=[ 4817], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5604],
00:23:30.292 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6128],
00:23:30.292 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6718],
00:23:30.292 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 8029], 99.95th=[ 8848],
00:23:30.292 | 99.99th=[10290]
00:23:30.292 bw ( KiB/s): min=46040, max=47816, per=100.00%, avg=47090.00, stdev=798.60, samples=4
00:23:30.292 iops : min=11510, max=11954, avg=11772.50, stdev=199.65, samples=4
00:23:30.292 write: IOPS=11.7k, BW=45.7MiB/s (48.0MB/s)(91.7MiB/2005msec); 0 zone resets
00:23:30.292 slat (nsec): min=1652, max=226364, avg=1858.30, stdev=1663.04
00:23:30.292 clat (usec): min=2459, max=9513, avg=4866.15, stdev=396.31
00:23:30.292 lat (usec): min=2475, max=9515, avg=4868.01, stdev=396.33
00:23:30.292 clat percentiles (usec):
00:23:30.292 | 1.00th=[ 3949], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555],
00:23:30.292 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4883], 60.00th=[ 4948],
00:23:30.292 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5342], 95.00th=[ 5473],
00:23:30.292 | 99.00th=[ 5735], 99.50th=[ 6063], 99.90th=[ 8160], 99.95th=[ 8717],
00:23:30.292 | 99.99th=[ 9372]
00:23:30.292 bw ( KiB/s): min=46528, max=47360, per=99.94%, avg=46816.00, stdev=378.63, samples=4
00:23:30.292 iops : min=11632, max=11840, avg=11704.00, stdev=94.66, samples=4
00:23:30.292 lat (msec) : 4=0.63%, 10=99.35%, 20=0.01%
00:23:30.292 cpu : usr=75.60%, sys=23.35%, ctx=48, majf=0, minf=6
00:23:30.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:23:30.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:23:30.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:23:30.293 issued rwts: total=23600,23480,0,0 short=0,0,0,0 dropped=0,0,0,0
00:23:30.293 latency : target=0, window=0, percentile=100.00%, depth=128
00:23:30.293
00:23:30.293 Run status group 0 (all jobs):
00:23:30.293 READ: bw=46.0MiB/s (48.2MB/s), 46.0MiB/s-46.0MiB/s (48.2MB/s-48.2MB/s), io=92.2MiB (96.7MB), run=2005-2005msec
00:23:30.293 WRITE: bw=45.7MiB/s (48.0MB/s), 45.7MiB/s-45.7MiB/s (48.0MB/s-48.0MB/s), io=91.7MiB (96.2MB), run=2005-2005msec
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:23:30.293 19:15:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:23:30.551 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:23:30.551 fio-3.35
00:23:30.551 Starting 1 thread
00:23:30.551 EAL: No free 2048 kB hugepages reported on node 1
00:23:33.070
00:23:33.070 test: (groupid=0, jobs=1): err= 0: pid=396862: Fri Jul 12 19:15:35 2024
00:23:33.070 read: IOPS=10.7k, BW=167MiB/s (175MB/s)(335MiB/2006msec)
00:23:33.070 slat (nsec): min=2571, max=88315, avg=2901.11, stdev=1466.29
00:23:33.070 clat (usec): min=1701, max=50238, avg=7053.23, stdev=3500.42
00:23:33.070 lat (usec): min=1704, max=50241, avg=7056.13, stdev=3500.46
00:23:33.070 clat percentiles (usec):
00:23:33.070 | 1.00th=[ 3589], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5342],
00:23:33.070 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6849], 60.00th=[ 7308],
00:23:33.070 | 70.00th=[ 7635], 80.00th=[ 8094], 90.00th=[ 8979], 95.00th=[ 9896],
00:23:33.070 | 99.00th=[11600], 99.50th=[44827], 99.90th=[49021], 99.95th=[49546],
00:23:33.070 | 99.99th=[50070]
00:23:33.070 bw ( KiB/s): min=76576, max=93728, per=50.39%, avg=86192.00, stdev=7196.92, samples=4
00:23:33.070 iops : min= 4786, max= 5858, avg=5387.00, stdev=449.81, samples=4
00:23:33.070 write: IOPS=6340, BW=99.1MiB/s (104MB/s)(176MiB/1781msec); 0 zone resets
00:23:33.070 slat (usec): min=30, max=381, avg=32.67, stdev= 7.86
00:23:33.070 clat (usec): min=3992, max=16139, avg=8635.26, stdev=1510.92
00:23:33.070 lat (usec): min=4023, max=16170, avg=8667.93, stdev=1512.29
00:23:33.070 clat percentiles (usec):
00:23:33.070 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7308],
00:23:33.070 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848],
00:23:33.070 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11338],
00:23:33.070 | 99.00th=[12649], 99.50th=[13173], 99.90th=[14222], 99.95th=[14484],
00:23:33.070 | 99.99th=[16057]
00:23:33.070 bw ( KiB/s): min=80480, max=97760, per=88.61%, avg=89896.00, stdev=7214.25, samples=4
00:23:33.070 iops : min= 5030, max= 6110, avg=5618.50, stdev=450.89, samples=4
00:23:33.070 lat (msec) : 2=0.03%, 4=1.95%, 10=88.78%, 20=8.86%, 50=0.38%
00:23:33.070 lat (msec) : 100=0.01%
00:23:33.070 cpu
: usr=83.55%, sys=13.86%, ctx=188, majf=0, minf=3 00:23:33.070 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:33.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:33.070 issued rwts: total=21444,11293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:33.070 00:23:33.070 Run status group 0 (all jobs): 00:23:33.070 READ: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=335MiB (351MB), run=2006-2006msec 00:23:33.070 WRITE: bw=99.1MiB/s (104MB/s), 99.1MiB/s-99.1MiB/s (104MB/s-104MB/s), io=176MiB (185MB), run=1781-1781msec 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:33.070 rmmod nvme_tcp 00:23:33.070 rmmod nvme_fabrics 00:23:33.070 rmmod nvme_keyring 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 395736 ']' 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 395736 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 395736 ']' 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 395736 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 395736 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 395736' 00:23:33.070 killing process with pid 395736 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 395736 00:23:33.070 19:15:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 395736 00:23:33.329 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.329 19:15:35 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:33.329 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:33.329 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.329 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.329 19:15:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.329 19:15:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.329 19:15:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.233 19:15:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:35.233 00:23:35.233 real 0m15.801s 00:23:35.233 user 0m47.074s 00:23:35.233 sys 0m6.286s 00:23:35.492 19:15:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:35.492 19:15:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.492 ************************************ 00:23:35.492 END TEST nvmf_fio_host 00:23:35.492 ************************************ 00:23:35.492 19:15:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:35.492 19:15:37 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:35.492 19:15:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:35.493 19:15:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:35.493 19:15:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:35.493 ************************************ 00:23:35.493 START TEST nvmf_failover 00:23:35.493 ************************************ 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:35.493 * Looking for test storage... 
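
(Editor's note: the nvmf_fio_host test that just finished above reduces to a short RPC sequence plus a fio run through the SPDK NVMe plugin. A condensed sketch follows for orientation; rpc.py and fio stand in for the long workspace paths, and every option shown is the one exercised in this log, not a recommendation.)

    # Target setup, as driven by host/fio.sh in this run:
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # fio then runs with the SPDK plugin preloaded; the filename string encodes
    # transport, address, service id and namespace instead of a block device:
    LD_PRELOAD=spdk/build/fio/spdk_nvme fio spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
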
00:23:35.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:23:35.493 19:15:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:35.493 19:15:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.493 19:15:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.493 19:15:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.493 19:15:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:35.493 19:15:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:35.493 19:15:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:35.493 19:15:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:42.066 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:42.066 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:42.066 Found net devices under 0000:86:00.0: cvl_0_0 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.066 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:42.067 Found net devices under 0000:86:00.1: cvl_0_1 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:42.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:42.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:23:42.067 00:23:42.067 --- 10.0.0.2 ping statistics --- 00:23:42.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.067 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:42.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:23:42.067 00:23:42.067 --- 10.0.0.1 ping statistics --- 00:23:42.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.067 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=400732 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 400732 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 400732 ']' 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.067 19:15:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:42.067 [2024-07-12 19:15:43.742612] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
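
(Editor's note: before nvmf_tgt can listen on 10.0.0.2, nvmftestinit moved one port of the e810 pair into a network namespace and wired up addresses; the steps traced piecemeal above collect into the sketch below. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones from this run, and nvmf_tgt abbreviates the full build/bin path.)

    # Target NIC lives in its own namespace; the host side acts as initiator:
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                   # reachability checks
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target itself then starts inside the namespace:
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE
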
00:23:42.067 [2024-07-12 19:15:43.742653] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.067 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.067 [2024-07-12 19:15:43.814460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:42.067 [2024-07-12 19:15:43.887093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.067 [2024-07-12 19:15:43.887136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.067 [2024-07-12 19:15:43.887143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.067 [2024-07-12 19:15:43.887149] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.067 [2024-07-12 19:15:43.887154] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.067 [2024-07-12 19:15:43.887257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.067 [2024-07-12 19:15:43.887350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.067 [2024-07-12 19:15:43.887351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.067 19:15:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.067 19:15:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:42.067 19:15:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:42.067 19:15:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:42.067 19:15:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:42.067 19:15:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.067 19:15:44 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:42.324 [2024-07-12 19:15:44.740854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.324 19:15:44 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:42.581 Malloc0 00:23:42.582 19:15:44 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:42.838 19:15:45 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:42.838 19:15:45 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:43.095 [2024-07-12 19:15:45.520572] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.095 19:15:45 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:43.352 [2024-07-12 
19:15:45.693052] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:43.352 19:15:45 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:43.352 [2024-07-12 19:15:45.869644] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:43.353 19:15:45 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=401091 00:23:43.353 19:15:45 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:43.353 19:15:45 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:43.353 19:15:45 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 401091 /var/tmp/bdevperf.sock 00:23:43.353 19:15:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 401091 ']' 00:23:43.353 19:15:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.353 19:15:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:43.353 19:15:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:43.353 19:15:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:43.353 19:15:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:44.284 19:15:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:44.284 19:15:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:44.284 19:15:46 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:44.541 NVMe0n1 00:23:44.541 19:15:47 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:44.799 00:23:44.799 19:15:47 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=401325 00:23:44.799 19:15:47 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:44.799 19:15:47 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:45.730 19:15:48 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:45.988 [2024-07-12 19:15:48.446094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397080 is same with the state(5) to be set 00:23:45.988 [2024-07-12 19:15:48.446153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1397080 is same with the state(5) to be set 00:23:45.988 [2024-07-12 19:15:48.446163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397080 is same with the state(5) to be set 00:23:45.988 [2024-07-12 19:15:48.446171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397080 is same with the state(5) to be set 00:23:45.988 [2024-07-12 19:15:48.446179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397080 is same with the state(5) to be set 00:23:45.988 [2024-07-12 19:15:48.446187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397080 is same with the state(5) to be set 00:23:45.988 [2024-07-12 19:15:48.446194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397080 is same with the state(5) to be set 00:23:45.988 [2024-07-12 19:15:48.446201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397080 is same with the state(5) to be set 00:23:45.988 [2024-07-12 19:15:48.446209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397080 is same with the state(5) to be set 00:23:45.988 19:15:48 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:49.262 19:15:51 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:49.262 00:23:49.262 19:15:51 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:49.519 [2024-07-12 19:15:51.980443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397f20 is same with the state(5) to be set 00:23:49.519 [2024-07-12 19:15:51.980482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397f20 is same with the state(5) to be set 00:23:49.519 [2024-07-12 19:15:51.980490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397f20 is same with the state(5) to be set 00:23:49.519 [2024-07-12 19:15:51.980497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397f20 is same with the state(5) to be set 00:23:49.519 [2024-07-12 19:15:51.980503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397f20 is same with the state(5) to be set 00:23:49.519 [2024-07-12 19:15:51.980510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397f20 is same with the state(5) to be set 00:23:49.519 [2024-07-12 19:15:51.980517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1397f20 is same with the state(5) to be set 00:23:49.519 19:15:52 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:52.793 19:15:55 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:52.793 [2024-07-12 19:15:55.188714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.793 19:15:55 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:53.724 19:15:56 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:53.981 [2024-07-12 19:15:56.398300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 [2024-07-12 19:15:56.398516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1398aa0 is same with the state(5) to be set 00:23:53.981 19:15:56 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 401325 00:24:00.527 0 00:24:00.527 19:16:02 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 401091 00:24:00.527 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 401091 ']' 00:24:00.527 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 401091 00:24:00.527 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:00.527 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.527 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 401091 00:24:00.527 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:00.527 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:00.527 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 401091' 00:24:00.527 killing process with pid 401091 00:24:00.527 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 401091 00:24:00.527 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 401091 00:24:00.527 19:16:02 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:00.527 [2024-07-12 19:15:45.943272] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
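
(Editor's note: the failover sequence above is easy to lose in the qpair state messages, so here it is condensed; rpc.py again abbreviates scripts/rpc.py, and all ports, names and sleeps are the ones this run used. The repeated "recv state of tqpair ... state(5)" errors, and the "ABORTED - SQ DELETION" completions in the try.txt dump that continues below, appear to be the expected fallout of tearing listeners down under active I/O rather than test failures; the run returns 0.)

    # bdevperf (-q 128 -o 4096 -w verify -t 15 -f) attaches the same subsystem twice:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # with verify I/O in flight, paths are pulled and restored one at a time:
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # I/O survives every listener removal by failing over to a remaining path.
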
00:24:00.527 [2024-07-12 19:15:45.943323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401091 ]
00:24:00.527 EAL: No free 2048 kB hugepages reported on node 1
00:24:00.527 [2024-07-12 19:15:46.009162] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:00.527 [2024-07-12 19:15:46.083159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:00.527 Running I/O for 15 seconds...
00:24:00.527 [2024-07-12 19:15:48.446565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.527 [2024-07-12 19:15:48.446599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical print_command/print_completion pairs condensed: READ sqid:1 lba:97632-97848 and WRITE sqid:1 lba:97904-98536, every command completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:24:00.530 [2024-07-12 19:15:48.448201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:00.530 [2024-07-12 19:15:48.448208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98544 len:8 PRP1 0x0 PRP2 0x0
00:24:00.530 [2024-07-12 19:15:48.448214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.530 [2024-07-12 19:15:48.448223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... condensed: the same manual-complete/abort-queued sequence repeats for WRITE sqid:1 lba:98552-98640 and READ sqid:1 lba:97856-97896 (all PRP1 0x0 PRP2 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:24:00.531 [2024-07-12 19:15:48.448686] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x898300 was disconnected and freed. reset controller.
00:24:00.531 [2024-07-12 19:15:48.448694] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:00.531 [2024-07-12 19:15:48.448713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.531 [2024-07-12 19:15:48.448720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... condensed: the same ASYNC EVENT REQUEST abort repeats for qid:0 cid:1, cid:2 and cid:3 ...]
00:24:00.531 [2024-07-12 19:15:48.448768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.531 [2024-07-12 19:15:48.448805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87a540 (9): Bad file descriptor
00:24:00.531 [2024-07-12 19:15:48.451621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.531 [2024-07-12 19:15:48.485650] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:00.531 [2024-07-12 19:15:51.980889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:00.531 [2024-07-12 19:15:51.980922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... condensed: further WRITE (sqid:1 lba:31848-31944 and lba:31952-32232) and READ (sqid:1 lba:31448-31520) notices, every command completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:24:00.532 [2024-07-12 19:15:51.981816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.532 [2024-07-12 19:15:51.981822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.532 [2024-07-12 19:15:51.981830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.532 [2024-07-12 19:15:51.981837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.532 [2024-07-12 19:15:51.981844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.981851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.981858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.981865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.981873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.981880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.981888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.981894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.981902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.981910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.981918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.981924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.981932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.981939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.981947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.981953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.981961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.981967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.981975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:32328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.981982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.981989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.981996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.982010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.982024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.982038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.982052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:32376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.982066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.982080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.982096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 
19:15:51.982267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.533 [2024-07-12 19:15:51.982454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.533 [2024-07-12 19:15:51.982472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.533 [2024-07-12 19:15:51.982480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.534 [2024-07-12 19:15:51.982492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.534 [2024-07-12 19:15:51.982507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.534 [2024-07-12 19:15:51.982521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.534 [2024-07-12 19:15:51.982536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.534 [2024-07-12 19:15:51.982550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.534 [2024-07-12 19:15:51.982565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:21 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.534 [2024-07-12 19:15:51.982580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.534 [2024-07-12 19:15:51.982594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.534 [2024-07-12 19:15:51.982609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.534 [2024-07-12 19:15:51.982624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.534 [2024-07-12 19:15:51.982638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.534 [2024-07-12 19:15:51.982654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.534 [2024-07-12 19:15:51.982671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.534 [2024-07-12 19:15:51.982685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.534 [2024-07-12 19:15:51.982700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.534 [2024-07-12 19:15:51.982714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32416 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.534 [2024-07-12 19:15:51.982730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:32424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.534 [2024-07-12 19:15:51.982744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.534 [2024-07-12 19:15:51.982759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.534 [2024-07-12 19:15:51.982773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.534 [2024-07-12 19:15:51.982788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.534 [2024-07-12 19:15:51.982803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:00.534 [2024-07-12 19:15:51.982825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:00.534 [2024-07-12 19:15:51.982831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32464 len:8 PRP1 0x0 PRP2 0x0 00:24:00.534 [2024-07-12 19:15:51.982837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.534 [2024-07-12 19:15:51.982880] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa45380 was disconnected and freed. reset controller. 
00:24:00.534 [2024-07-12 19:15:51.982888] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:00.534 [2024-07-12 19:15:51.982909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.534 [2024-07-12 19:15:51.982918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.534 [2024-07-12 19:15:51.982926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.534 [2024-07-12 19:15:51.982932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.534 [2024-07-12 19:15:51.982939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.534 [2024-07-12 19:15:51.982946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.534 [2024-07-12 19:15:51.982953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.534 [2024-07-12 19:15:51.982959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.534 [2024-07-12 19:15:51.982966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.534 [2024-07-12 19:15:51.985798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.534 [2024-07-12 19:15:51.985827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87a540 (9): Bad file descriptor
00:24:00.534 [2024-07-12 19:15:52.057741] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:00.534 [2024-07-12 19:15:56.398820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:56888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.534 [2024-07-12 19:15:56.398853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.536 [2024-07-12 19:15:56.400183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.536 [2024-07-12 19:15:56.400189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.536 [2024-07-12 19:15:56.400197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:57448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:57480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:57488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400349] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.536 [2024-07-12 19:15:56.400485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:33 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.536 [2024-07-12 19:15:56.400714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:00.536 [2024-07-12 19:15:56.400740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:00.536 [2024-07-12 19:15:56.400745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57904 len:8 PRP1 0x0 PRP2 0x0 00:24:00.536 [2024-07-12 19:15:56.400754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.536 [2024-07-12 19:15:56.400797] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x875d60 was disconnected and freed. reset controller. 
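The long run of READ/WRITE records above is the expected signature of a dropped path: when the I/O qpair is deleted during failover, every queued command is manually completed with ABORTED - SQ DELETION before the controller is reset on the next path. A minimal sketch for summarizing such a storm from the captured output file (try.txt, the capture file this test cats later; plain grep/sort/uniq, nothing SPDK-specific):

    # total aborted completions in the captured log
    grep -c 'ABORTED - SQ DELETION' try.txt
    # aborted commands broken down by opcode and submission queue
    grep -oE '\*NOTICE\*: (READ|WRITE) sqid:[0-9]+' try.txt | sort | uniq -c | sort -rn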
00:24:00.536 [2024-07-12 19:15:56.400805] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:24:00.536 [2024-07-12 19:15:56.400825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.536 [2024-07-12 19:15:56.400832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.536 [2024-07-12 19:15:56.400840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.536 [2024-07-12 19:15:56.400846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.536 [2024-07-12 19:15:56.400853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.536 [2024-07-12 19:15:56.400859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.536 [2024-07-12 19:15:56.400866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.536 [2024-07-12 19:15:56.400873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.536 [2024-07-12 19:15:56.400881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.536 [2024-07-12 19:15:56.403735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.536 [2024-07-12 19:15:56.403765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87a540 (9): Bad file descriptor
00:24:00.536 [2024-07-12 19:15:56.480560] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
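The failover above (back from 10.0.0.2:4422 to 10.0.0.2:4420) is possible because the bdev_nvme controller NVMe0 was attached over three TCP paths to the same subsystem; the same attach sequence is traced later in this log at host/failover.sh@78-80, and condenses (workspace prefix elided) to:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

Detaching the active path (as host/failover.sh@84 does below for port 4420) then forces bdev_nvme to fail over to the next registered transport ID.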
00:24:00.536
00:24:00.536 Latency(us)
00:24:00.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:00.536 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:00.536 Verification LBA range: start 0x0 length 0x4000
00:24:00.536 NVMe0n1 : 15.01 11097.58 43.35 547.69 0.00 10969.20 427.41 27810.06
00:24:00.536 ===================================================================================================================
00:24:00.536 Total : 11097.58 43.35 547.69 0.00 10969.20 427.41 27810.06
00:24:00.536 Received shutdown signal, test time was about 15.000000 seconds
00:24:00.536
00:24:00.536 Latency(us)
00:24:00.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:00.536 ===================================================================================================================
00:24:00.536 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:00.536 19:16:02 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:00.536 19:16:02 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:00.536 19:16:02 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:00.536 19:16:02 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=403844
00:24:00.536 19:16:02 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 403844 /var/tmp/bdevperf.sock
00:24:00.536 19:16:02 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:00.536 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 403844 ']'
00:24:00.536 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:00.536 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:00.536 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
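The pass gate for the 15-second run sits in the trace above: host/failover.sh@65-67 counts 'Resetting controller successful' lines in the captured output and requires exactly three, one per path transition. A sketch of that check (hedged reconstruction of the traced commands, not the script verbatim):

    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count != 3 )) && exit 1    # all three failovers must have recovered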
00:24:00.536 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:00.536 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:01.103 19:16:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
19:16:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
19:16:03 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-07-12 19:16:03.686047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:01.362 19:16:03 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
[2024-07-12 19:16:03.862486] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:01.362 19:16:03 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:01.621 NVMe0n1
00:24:01.621 19:16:04 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:02.187
00:24:02.187 19:16:04 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:02.187
00:24:02.187 19:16:04 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:02.187 19:16:04 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:24:02.446 19:16:04 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:02.704 19:16:05 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:24:05.992 19:16:08 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:16:08 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:24:05.992 19:16:08 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:05.992 19:16:08 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=404775
00:24:05.992 19:16:08 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 404775
00:24:06.931 0
00:24:06.931 19:16:09 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-07-12 19:16:02.711008] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
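Note how this run is driven: the second bdevperf instance was launched with -z (host/failover.sh@72 above), so it registers the NVMe0n1 bdev and then idles on its RPC socket until host/failover.sh@89 triggers the configured workload and @92 waits for the result. Stripped of workspace paths, the pattern is:

    # start bdevperf in wait-for-RPC mode, then kick the job over the socket
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests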
00:24:06.931 [2024-07-12 19:16:02.711061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid403844 ]
00:24:06.931 EAL: No free 2048 kB hugepages reported on node 1
00:24:06.931 [2024-07-12 19:16:02.779150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:06.931 [2024-07-12 19:16:02.848276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:06.931 [2024-07-12 19:16:05.074635] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:06.931 [2024-07-12 19:16:05.074682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:06.931 [2024-07-12 19:16:05.074694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.931 [2024-07-12 19:16:05.074702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:06.931 [2024-07-12 19:16:05.074709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.931 [2024-07-12 19:16:05.074716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:06.931 [2024-07-12 19:16:05.074723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.931 [2024-07-12 19:16:05.074731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:06.931 [2024-07-12 19:16:05.074737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.931 [2024-07-12 19:16:05.074744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.931 [2024-07-12 19:16:05.074769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.931 [2024-07-12 19:16:05.074782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc3540 (9): Bad file descriptor
00:24:06.931 [2024-07-12 19:16:05.120160] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:06.931 Running I/O for 1 seconds...
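As a quick consistency check on the 1-second summary that follows: bdevperf reports both IOPS and MiB/s for the 4096-byte I/O size, and the two columns agree, since 11040.51 IOPS x 4096 bytes / 2^20 is about 43.13 MiB/s:

    awk 'BEGIN { printf "%.2f MiB/s\n", 11040.51 * 4096 / 1048576 }'    # -> 43.13 MiB/s

The same arithmetic holds for the 15-second table earlier (11097.58 / 256 = 43.35).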
00:24:06.931
00:24:06.931 Latency(us)
00:24:06.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:06.931 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:06.931 Verification LBA range: start 0x0 length 0x4000
00:24:06.931 NVMe0n1 : 1.01 11040.51 43.13 0.00 0.00 11534.73 2421.98 11340.58
00:24:06.931 ===================================================================================================================
00:24:06.931 Total : 11040.51 43.13 0.00 0.00 11534.73 2421.98 11340.58
00:24:06.931 19:16:09 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:16:09 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:07.190 19:16:09 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:07.449 19:16:09 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:16:09 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:07.449 19:16:09 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:07.708 19:16:10 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:11.024 19:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:24:11.024 19:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 403844
19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 403844 ']'
19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 403844
19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 403844
19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 403844'
killing process with pid 403844
19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 403844
19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 403844
19:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
19:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
19:16:13
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:11.282 rmmod nvme_tcp 00:24:11.282 rmmod nvme_fabrics 00:24:11.282 rmmod nvme_keyring 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 400732 ']' 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 400732 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 400732 ']' 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 400732 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 400732 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 400732' 00:24:11.282 killing process with pid 400732 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 400732 00:24:11.282 19:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 400732 00:24:11.541 19:16:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:11.541 19:16:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:11.541 19:16:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:11.541 19:16:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:11.541 19:16:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:11.541 19:16:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.541 19:16:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:11.541 19:16:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.079 19:16:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:14.079 00:24:14.079 real 0m38.229s 00:24:14.079 user 2m2.418s 00:24:14.079 sys 0m7.525s 00:24:14.079 19:16:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:14.079 19:16:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:14.079 
************************************ 00:24:14.079 END TEST nvmf_failover 00:24:14.079 ************************************ 00:24:14.079 19:16:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:14.079 19:16:16 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:14.079 19:16:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:14.079 19:16:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:14.079 19:16:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.079 ************************************ 00:24:14.079 START TEST nvmf_host_discovery 00:24:14.079 ************************************ 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:14.079 * Looking for test storage... 00:24:14.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.079 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:14.080 19:16:16 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:24:14.080 19:16:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.358 19:16:21 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:19.358 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:19.358 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.358 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:19.359 19:16:21 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:19.359 Found net devices under 0000:86:00.0: cvl_0_0 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:19.359 Found net devices under 0000:86:00.1: cvl_0_1 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.359 19:16:21 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:19.359 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.618 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.618 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.618 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:19.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:24:19.618 00:24:19.618 --- 10.0.0.2 ping statistics --- 00:24:19.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.618 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:24:19.618 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:19.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:24:19.618 00:24:19.618 --- 10.0.0.1 ping statistics --- 00:24:19.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.618 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:24:19.618 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.618 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:24:19.618 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:19.618 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.618 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:19.618 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:19.618 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.618 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:19.618 19:16:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:19.618 19:16:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:19.618 19:16:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:19.618 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:19.618 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.618 19:16:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=409044 00:24:19.618 19:16:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
409044 00:24:19.618 19:16:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:19.618 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 409044 ']' 00:24:19.618 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.618 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.618 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.618 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.618 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.618 [2024-07-12 19:16:22.080080] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:24:19.618 [2024-07-12 19:16:22.080125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.618 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.618 [2024-07-12 19:16:22.148842] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.878 [2024-07-12 19:16:22.226409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.878 [2024-07-12 19:16:22.226448] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.878 [2024-07-12 19:16:22.226455] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.878 [2024-07-12 19:16:22.226461] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.878 [2024-07-12 19:16:22.226466] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
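For readers following the trace: the nvmf_tcp_init steps above (nvmf/common.sh@229-268) build the test topology by moving the target-side port into its own network namespace, so target and initiator can talk over real NICs on one machine. A minimal standalone sketch of the same setup, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing seen in this run:

    # Flush both ports, then hide the target port in a private netns
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The sub-millisecond RTTs in the ping statistics above confirm the namespace plumbing before any NVMe/TCP traffic starts.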
00:24:19.878 [2024-07-12 19:16:22.226500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.447 [2024-07-12 19:16:22.930462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.447 [2024-07-12 19:16:22.942627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.447 null0 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.447 null1 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=409236 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- 
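The nvmfappstart and rpc_cmd calls above then bring up two SPDK instances: the target (core mask 0x2, inside the namespace) and, at host/discovery.sh@44-46, a second nvmf_tgt acting as the host on its own RPC socket. In this harness rpc_cmd is a thin wrapper over scripts/rpc.py, so the equivalent direct invocations would look roughly like this (SPDK path taken from this workspace):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Target instance inside the namespace (as in nvmf/common.sh@480)
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    "$SPDK/scripts/rpc.py" bdev_null_create null0 1000 512   # 1000 MB, 512 B blocks
    "$SPDK/scripts/rpc.py" bdev_null_create null1 1000 512
    # Host instance on a separate RPC socket (host/discovery.sh@44)
    "$SPDK/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &

Note the transport options matching NVMF_TRANSPORT_OPTS='-t tcp -o' above: -o is rpc.py's TCP-only C2H-success toggle, and -u 8192 sets the I/O unit size.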
host/discovery.sh@46 -- # waitforlisten 409236 /tmp/host.sock 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 409236 ']' 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:20.447 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.447 19:16:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.706 [2024-07-12 19:16:23.019644] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:24:20.706 [2024-07-12 19:16:23.019684] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid409236 ] 00:24:20.706 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.706 [2024-07-12 19:16:23.087333] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.706 [2024-07-12 19:16:23.160650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- 
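host/discovery.sh@50-51 in the block above are the point of the whole test: enable bdev_nvme debug logging on the host instance and attach it to the discovery service on 10.0.0.2:8009. In direct rpc.py terms (the -s flag selects the host's RPC socket; flags copied from the trace):

    # SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk (workspace path from this run)
    "$SPDK/scripts/rpc.py" -s /tmp/host.sock log_set_flag bdev_nvme
    "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

From here on, every subsystem the target advertises gets attached automatically under the nvme controller name prefix, which is what the rest of the log polls for.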
common/autotest_common.sh@10 -- # set +x 00:24:21.273 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.533 19:16:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:21.533 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.533 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:21.533 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
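The repeated @59/@55 probes above are two one-line helpers that the assertions compare against; reconstructed from the traced jq | sort | xargs pipelines (function names as in host/discovery.sh, socket path as used here):

    get_subsystem_names() {   # controller names known to the host, space-separated
        "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {         # bdev names visible on the host, space-separated
        "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

Both print an empty string before anything attaches, which is exactly what the [[ '' == '' ]] checks in this stretch of the log assert.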
null0 00:24:21.533 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.533 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.533 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.533 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:21.533 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:21.533 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:21.533 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:21.533 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.533 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.533 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:21.533 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.793 [2024-07-12 19:16:24.157891] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- 
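On the target side, @86/@90/@96 above stand up the subsystem that the discovery service will advertise: create nqn.2016-06.io.spdk:cnode0, back it with the null0 bdev as a namespace, and open a data listener on port 4420. As direct RPCs, roughly:

    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

Because the subsystem is created without an allow-any-host flag, nothing attaches yet; the host only appears once @103 below admits nqn.2021-12.io.spdk:test via nvmf_subsystem_add_host.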
host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:24:21.793 19:16:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:22.362 [2024-07-12 19:16:24.880806] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:22.362 [2024-07-12 19:16:24.880826] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:22.362 [2024-07-12 19:16:24.880837] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:22.622 [2024-07-12 19:16:24.967106] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:22.881 [2024-07-12 19:16:25.191574] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
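The @912-@918 fragments interleaved through this whole section come from autotest_common.sh's waitforcondition polling loop. Reassembled from the trace (local cond, max=10, (( max-- )), eval, sleep 1, return 0); the timeout path is never reached in this excerpt, so the final return 1 is an assumption:

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
            sleep 1
        done
        return 1   # assumed: give up after ~10 polls
    }

Immediately above, the first poll of get_subsystem_names returns '' (the discovery controller has not attached yet), so the loop sleeps once; the attach completes during that second, as the bdev_nvme INFO lines show.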
Discovery[10.0.0.2:8009] attach nvme0 done 00:24:22.881 [2024-07-12 19:16:25.191594] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:22.881 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:23.141 19:16:25 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
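The is_notification_count_eq checks in this stretch lean on SPDK's notification queue: bdev/subsystem events are recorded with increasing IDs, and notify_get_notifications -i <id> returns the events after <id>. A sketch of the counting helper as traced at @74/@75 (the notify_id update rule is inferred from the 0 -> 1 -> 2 progression in this log):

    get_notification_count() {
        notification_count=$("$SPDK/scripts/rpc.py" -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))   # assumed bookkeeping
    }

So attaching the first namespace produced one notification (notify_id 0 -> 1), and the check just below confirms that adding null1 at @111 produced exactly one more, moving notify_id to 2.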
'expected_count))' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.141 [2024-07-12 19:16:25.657966] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:23.141 [2024-07-12 19:16:25.658380] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:23.141 [2024-07-12 19:16:25.658401] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:23.141 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.401 19:16:25 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.401 [2024-07-12 19:16:25.745652] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.401 19:16:25 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:23.401 19:16:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:23.401 [2024-07-12 19:16:25.805018] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:23.401 [2024-07-12 19:16:25.805035] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:23.401 [2024-07-12 19:16:25.805040] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:24.337 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:24.337 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:24.337 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:24.337 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:24.337 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:24.337 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.337 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:24.337 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.337 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:24.337 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.337 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:24.337 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:24.337 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:24.337 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:24.337 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
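With the 4421 listener added at @118, the multipath assertions above compare the controller's path set against "$NVMF_PORT $NVMF_SECOND_PORT" (4420 4421). The @63 probe they use, reconstructed from the trace:

    get_subsystem_paths() {   # sorted trsvcids of every path to controller $1
        "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

The first poll still printed only 4420; after one sleep the discovery AER had been processed, the 4421 path attached ('found again' on both ports in the log), and the poll printed '4420 4421'.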
| length' 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.338 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.338 [2024-07-12 19:16:26.902353] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:24.338 [2024-07-12 19:16:26.902375] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:24.338 [2024-07-12 19:16:26.903480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.338 [2024-07-12 19:16:26.903495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.338 [2024-07-12 19:16:26.903503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.338 [2024-07-12 19:16:26.903510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.338 [2024-07-12 19:16:26.903517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.338 [2024-07-12 19:16:26.903524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.338 [2024-07-12 19:16:26.903531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.338 [2024-07-12 19:16:26.903537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.338 [2024-07-12 19:16:26.903543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313f10 is same with the state(5) to be set 00:24:24.597 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.597 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:24.597 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:24.597 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:24.597 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:24.597 19:16:26 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:24.597 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:24.597 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:24.597 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:24.597 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.597 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:24.597 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.597 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:24.597 [2024-07-12 19:16:26.913494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1313f10 (9): Bad file descriptor 00:24:24.597 [2024-07-12 19:16:26.923532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:24.597 [2024-07-12 19:16:26.923801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.597 [2024-07-12 19:16:26.923815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1313f10 with addr=10.0.0.2, port=4420 00:24:24.597 [2024-07-12 19:16:26.923822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313f10 is same with the state(5) to be set 00:24:24.597 [2024-07-12 19:16:26.923834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1313f10 (9): Bad file descriptor 00:24:24.597 [2024-07-12 19:16:26.923843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:24.597 [2024-07-12 19:16:26.923849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:24.597 [2024-07-12 19:16:26.923857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:24.597 [2024-07-12 19:16:26.923867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.597 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.597 [2024-07-12 19:16:26.933587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:24.597 [2024-07-12 19:16:26.933763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.597 [2024-07-12 19:16:26.933775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1313f10 with addr=10.0.0.2, port=4420 00:24:24.597 [2024-07-12 19:16:26.933781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313f10 is same with the state(5) to be set 00:24:24.597 [2024-07-12 19:16:26.933791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1313f10 (9): Bad file descriptor 00:24:24.597 [2024-07-12 19:16:26.933800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:24.597 [2024-07-12 19:16:26.933806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:24.597 [2024-07-12 19:16:26.933812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:24.597 [2024-07-12 19:16:26.933821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.597 [2024-07-12 19:16:26.943635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:24.598 [2024-07-12 19:16:26.943808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.598 [2024-07-12 19:16:26.943819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1313f10 with addr=10.0.0.2, port=4420 00:24:24.598 [2024-07-12 19:16:26.943826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313f10 is same with the state(5) to be set 00:24:24.598 [2024-07-12 19:16:26.943839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1313f10 (9): Bad file descriptor 00:24:24.598 [2024-07-12 19:16:26.943848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:24.598 [2024-07-12 19:16:26.943853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:24.598 [2024-07-12 19:16:26.943859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:24.598 [2024-07-12 19:16:26.943868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.598 [2024-07-12 19:16:26.953684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:24.598 [2024-07-12 19:16:26.953869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.598 [2024-07-12 19:16:26.953882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1313f10 with addr=10.0.0.2, port=4420 00:24:24.598 [2024-07-12 19:16:26.953889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313f10 is same with the state(5) to be set 00:24:24.598 [2024-07-12 19:16:26.953899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1313f10 (9): Bad file descriptor 00:24:24.598 [2024-07-12 19:16:26.953908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:24.598 [2024-07-12 19:16:26.953913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:24.598 [2024-07-12 19:16:26.953920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:24.598 [2024-07-12 19:16:26.953928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.598 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.598 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:24.598 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:24.598 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:24.598 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:24.598 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:24.598 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:24.598 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:24.598 [2024-07-12 19:16:26.963737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:24.598 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:24.598 [2024-07-12 19:16:26.963895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.598 [2024-07-12 19:16:26.963908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1313f10 with addr=10.0.0.2, port=4420 00:24:24.598 [2024-07-12 19:16:26.963914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313f10 is same with the state(5) to be set 00:24:24.598 [2024-07-12 19:16:26.963924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1313f10 (9): Bad file descriptor 00:24:24.598 [2024-07-12 19:16:26.963932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:24.598 [2024-07-12 19:16:26.963938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:24.598 [2024-07-12 19:16:26.963944] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:24.598 [2024-07-12 19:16:26.963953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.598 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:24.598 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.598 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:24.598 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.598 19:16:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:24.598 [2024-07-12 19:16:26.973787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:24.598 [2024-07-12 19:16:26.973988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.598 [2024-07-12 19:16:26.974001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1313f10 with addr=10.0.0.2, port=4420 00:24:24.598 [2024-07-12 19:16:26.974008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313f10 is same with the state(5) to be set 00:24:24.598 [2024-07-12 19:16:26.974018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1313f10 (9): Bad file descriptor 00:24:24.598 [2024-07-12 19:16:26.974034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:24.598 [2024-07-12 19:16:26.974041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:24.598 [2024-07-12 19:16:26.974047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:24.598 [2024-07-12 19:16:26.974056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.598 [2024-07-12 19:16:26.983840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:24.598 [2024-07-12 19:16:26.984090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.598 [2024-07-12 19:16:26.984102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1313f10 with addr=10.0.0.2, port=4420 00:24:24.598 [2024-07-12 19:16:26.984109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1313f10 is same with the state(5) to be set 00:24:24.598 [2024-07-12 19:16:26.984119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1313f10 (9): Bad file descriptor 00:24:24.598 [2024-07-12 19:16:26.984134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:24.598 [2024-07-12 19:16:26.984141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:24.598 [2024-07-12 19:16:26.984147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:24.598 [2024-07-12 19:16:26.984156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
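The burst of connect() errno 111 (ECONNREFUSED) failures above is the interesting part of this phase, not a bug: @127 removed the 4420 listener while the host still held a controller on that port, so every bdev_nvme reconnect attempt is refused until the next discovery log page prunes the stale path. Target-side trigger and the condition the test then waits for, sketched:

    "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # once the log page is re-fetched: '4420 not found' / '4421 found again', then
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'

The '...4420 not found / ...4421 found again' INFO lines just below mark the prune, after which @131 sees only 4421 and the reconnect noise stops.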
00:24:24.598 [2024-07-12 19:16:26.990151] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:24.598 [2024-07-12 19:16:26.990166] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:24.598 19:16:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:24.598 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.599 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:24.599 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:24.858 
19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.858 19:16:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.793 [2024-07-12 19:16:28.308674] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:25.793 [2024-07-12 19:16:28.308690] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:25.793 [2024-07-12 19:16:28.308700] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:26.051 [2024-07-12 19:16:28.394968] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:26.051 [2024-07-12 19:16:28.455115] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:26.051 [2024-07-12 19:16:28.455142] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.051 request: 00:24:26.051 { 00:24:26.051 "name": "nvme", 00:24:26.051 "trtype": "tcp", 00:24:26.051 "traddr": "10.0.0.2", 00:24:26.051 "adrfam": "ipv4", 00:24:26.051 "trsvcid": 
"8009", 00:24:26.051 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:26.051 "wait_for_attach": true, 00:24:26.051 "method": "bdev_nvme_start_discovery", 00:24:26.051 "req_id": 1 00:24:26.051 } 00:24:26.051 Got JSON-RPC error response 00:24:26.051 response: 00:24:26.051 { 00:24:26.051 "code": -17, 00:24:26.051 "message": "File exists" 00:24:26.051 } 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.051 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.052 request: 00:24:26.052 { 00:24:26.052 "name": "nvme_second", 00:24:26.052 "trtype": "tcp", 00:24:26.052 "traddr": "10.0.0.2", 00:24:26.052 "adrfam": "ipv4", 00:24:26.052 "trsvcid": "8009", 00:24:26.052 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:26.052 "wait_for_attach": true, 00:24:26.052 "method": "bdev_nvme_start_discovery", 00:24:26.052 "req_id": 1 00:24:26.052 } 00:24:26.052 Got JSON-RPC error response 00:24:26.052 response: 00:24:26.052 { 00:24:26.052 "code": -17, 00:24:26.052 "message": "File exists" 00:24:26.052 } 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:26.052 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.310 19:16:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.245 [2024-07-12 19:16:29.703596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.245 [2024-07-12 19:16:29.703623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132e100 with addr=10.0.0.2, port=8010 00:24:27.245 [2024-07-12 19:16:29.703635] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:27.245 [2024-07-12 19:16:29.703641] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:27.245 [2024-07-12 19:16:29.703647] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:28.178 [2024-07-12 19:16:30.706079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.178 [2024-07-12 19:16:30.706109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1350a00 with addr=10.0.0.2, port=8010 00:24:28.179 [2024-07-12 19:16:30.706124] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:28.179 [2024-07-12 19:16:30.706131] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:28.179 [2024-07-12 19:16:30.706138] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:29.555 [2024-07-12 19:16:31.708241] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:29.555 request: 00:24:29.555 { 00:24:29.555 "name": "nvme_second", 00:24:29.555 "trtype": "tcp", 00:24:29.555 "traddr": "10.0.0.2", 00:24:29.555 "adrfam": "ipv4", 00:24:29.555 "trsvcid": "8010", 00:24:29.555 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:29.555 "wait_for_attach": false, 00:24:29.555 "attach_timeout_ms": 3000, 00:24:29.555 "method": "bdev_nvme_start_discovery", 00:24:29.555 "req_id": 1 00:24:29.555 } 00:24:29.555 Got JSON-RPC error response 00:24:29.555 response: 00:24:29.555 { 00:24:29.555 "code": -110, 00:24:29.555 "message": "Connection timed out" 00:24:29.555 } 00:24:29.555 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # es=1 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 409236 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:29.556 rmmod nvme_tcp 00:24:29.556 rmmod nvme_fabrics 00:24:29.556 rmmod nvme_keyring 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 409044 ']' 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 409044 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 409044 ']' 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 409044 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 409044 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
409044' 00:24:29.556 killing process with pid 409044 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 409044 00:24:29.556 19:16:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 409044 00:24:29.556 19:16:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:29.556 19:16:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:29.556 19:16:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:29.556 19:16:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:29.556 19:16:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:29.556 19:16:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.556 19:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.556 19:16:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:32.089 00:24:32.089 real 0m17.939s 00:24:32.089 user 0m22.113s 00:24:32.089 sys 0m5.666s 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.089 ************************************ 00:24:32.089 END TEST nvmf_host_discovery 00:24:32.089 ************************************ 00:24:32.089 19:16:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:32.089 19:16:34 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:32.089 19:16:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:32.089 19:16:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:32.089 19:16:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:32.089 ************************************ 00:24:32.089 START TEST nvmf_host_multipath_status 00:24:32.089 ************************************ 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:32.089 * Looking for test storage... 
00:24:32.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:32.089 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:32.090 19:16:34 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:32.090 19:16:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:37.364 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:37.364 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.364 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:37.365 Found net devices under 0000:86:00.0: cvl_0_0 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:37.365 Found net devices under 0000:86:00.1: cvl_0_1 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:37.365 19:16:39 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:37.365 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.624 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.624 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.624 19:16:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:37.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:24:37.624 00:24:37.624 --- 10.0.0.2 ping statistics --- 00:24:37.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.624 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:37.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:24:37.624 00:24:37.624 --- 10.0.0.1 ping statistics --- 00:24:37.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.624 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=414305 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 414305 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 414305 ']' 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:37.624 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:37.624 [2024-07-12 19:16:40.100889] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:24:37.624 [2024-07-12 19:16:40.100931] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.624 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.624 [2024-07-12 19:16:40.168358] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:37.883 [2024-07-12 19:16:40.247269] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.883 [2024-07-12 19:16:40.247303] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.883 [2024-07-12 19:16:40.247309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.883 [2024-07-12 19:16:40.247315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.883 [2024-07-12 19:16:40.247320] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.883 [2024-07-12 19:16:40.247382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.883 [2024-07-12 19:16:40.247382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.452 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:38.452 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:38.452 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:38.452 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:38.452 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:38.452 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.452 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=414305 00:24:38.452 19:16:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:38.711 [2024-07-12 19:16:41.094922] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.711 19:16:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:38.970 Malloc0 00:24:38.970 19:16:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:38.970 19:16:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:39.229 19:16:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.489 [2024-07-12 19:16:41.827629] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.489 19:16:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:39.489 [2024-07-12 19:16:42.016111] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:39.489 19:16:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=414630 00:24:39.489 19:16:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:39.489 19:16:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:39.489 19:16:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 414630 /var/tmp/bdevperf.sock 00:24:39.489 19:16:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 414630 ']' 00:24:39.489 19:16:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.489 19:16:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:39.489 19:16:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.489 19:16:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:39.489 19:16:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:40.057 19:16:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:40.057 19:16:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:40.057 19:16:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:40.057 19:16:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:40.316 Nvme0n1 00:24:40.316 19:16:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:40.574 Nvme0n1 00:24:40.574 19:16:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:40.574 19:16:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:43.106 19:16:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:43.106 19:16:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:43.106 19:16:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:43.106 19:16:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:44.043 19:16:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:44.043 19:16:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:44.043 19:16:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.043 19:16:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:44.302 19:16:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.302 19:16:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:44.302 19:16:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.302 19:16:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:44.561 19:16:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.561 19:16:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:44.561 19:16:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.561 19:16:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:44.561 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.561 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:44.561 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.561 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:44.820 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.820 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:44.820 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.820 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:45.079 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.079 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:45.079 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.079 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:45.339 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.339 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:45.339 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:45.339 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:45.598 19:16:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:46.535 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:46.535 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:46.535 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.535 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:46.794 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:46.794 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:46.794 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.794 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:47.052 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.052 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:47.052 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.052 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:47.312 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.312 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:47.312 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.312 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:47.312 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.312 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:47.312 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.312 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:47.571 19:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.571 19:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:47.571 19:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.571 19:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:47.830 19:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.830 19:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:47.830 19:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:48.089 19:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:48.089 19:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:49.467 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:49.467 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:49.467 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.467 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:49.467 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.467 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:49.467 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.467 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:49.726 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:49.726 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:49.726 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.726 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:49.726 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.726 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:49.726 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.726 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:49.984 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.984 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:49.985 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.985 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:50.243 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.243 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:50.243 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.243 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:50.500 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.500 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:50.500 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:50.500 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:50.758 19:16:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:51.694 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:51.694 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:51.694 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.694 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:51.952 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.952 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:51.952 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.952 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:52.211 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.211 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:52.211 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.211 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:52.470 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.470 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:52.470 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.470 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:52.470 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.470 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:52.470 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.470 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:52.728 19:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:24:52.728 19:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:52.728 19:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.728 19:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:52.987 19:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.987 19:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:52.987 19:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:52.987 19:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:53.247 19:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:54.183 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:54.183 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:54.183 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.183 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:54.442 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.442 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:54.442 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.442 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:54.701 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.701 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:54.701 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.701 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:54.960 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.960 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:24:54.960 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.960 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:54.960 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.960 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:54.960 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.960 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:55.218 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:55.218 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:55.218 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.218 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:55.477 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:55.477 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:55.477 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:55.477 19:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:55.736 19:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:56.673 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:56.673 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:56.673 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.673 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:56.932 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:56.932 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:56.932 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.932 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:57.190 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.190 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:57.190 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.190 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:57.448 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.448 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:57.448 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.448 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:57.448 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.448 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:57.448 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:57.448 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.707 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:57.707 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:57.707 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.707 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:57.964 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.964 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:58.222 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:58.222 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:24:58.222 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:58.480 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:59.414 19:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:59.414 19:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:59.414 19:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.414 19:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:59.671 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.671 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:59.671 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.671 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:59.928 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.928 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:59.928 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.928 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:00.186 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.186 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:00.186 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:00.186 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.186 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.186 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:00.186 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.186 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:00.444 19:17:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.444 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:00.444 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.444 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:00.702 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.702 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:00.702 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:00.962 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:00.962 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:02.339 19:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:02.339 19:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:02.339 19:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.339 19:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:02.339 19:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:02.339 19:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:02.339 19:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:02.339 19:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.339 19:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.339 19:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:02.598 19:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.598 19:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:02.598 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.598 19:17:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:02.598 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.598 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:02.858 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.858 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:02.858 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.858 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:03.117 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.117 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:03.117 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.117 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:03.117 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.118 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:03.118 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:03.376 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:03.635 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:04.572 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:04.572 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:04.572 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.572 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:04.831 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.831 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:04.831 19:17:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.831 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:05.090 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.090 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:05.090 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.090 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:05.090 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.090 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:05.348 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.348 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:05.348 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.348 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:05.348 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.348 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:05.606 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.606 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:05.606 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.606 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:05.864 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.864 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:05.864 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:06.123 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:06.123 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:07.499 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:07.499 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:07.499 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.499 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:07.499 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.499 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:07.499 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:07.500 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.500 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:07.500 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:07.500 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:07.500 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.757 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.757 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:07.757 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.757 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:08.016 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.016 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:08.016 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.016 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:08.275 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.275 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:08.275 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.275 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:08.275 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.275 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 414630 00:25:08.275 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 414630 ']' 00:25:08.275 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 414630 00:25:08.275 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:25:08.275 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:08.275 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 414630 00:25:08.557 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:08.557 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:08.557 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 414630' 00:25:08.557 killing process with pid 414630 00:25:08.557 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 414630 00:25:08.557 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 414630 00:25:08.557 Connection closed with partial response: 00:25:08.557 00:25:08.557 00:25:08.557 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 414630 00:25:08.557 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:08.557 [2024-07-12 19:16:42.085896] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:25:08.557 [2024-07-12 19:16:42.085943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414630 ] 00:25:08.557 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.557 [2024-07-12 19:16:42.152722] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.557 [2024-07-12 19:16:42.225710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.557 Running I/O for 90 seconds... 
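The status checks interleaved through the run above all use one helper pattern: query the bdevperf RPC socket for the live I/O paths, filter the JSON by listener port with jq, and compare a single attribute (current, connected, or accessible) against the expected value. A minimal sketch of that helper, reconstructed from the rpc.py and jq invocations captured in this log (function and variable names here are illustrative; the real logic lives in host/multipath_status.sh):

    #!/usr/bin/env bash
    # Reconstructed sketch of the per-port status check seen in this log.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }

    # Example from the run above: with both listeners optimized, port 4420
    # carries the I/O (current=true) and both ports stay accessible.
    port_status 4420 current true
    port_status 4421 current false
    port_status 4420 accessible true
    port_status 4421 accessible true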
00:25:08.557 [2024-07-12 19:16:55.510881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.510922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.510942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.510951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.510964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.510972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.510984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.510991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.511009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.511028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.511046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.511065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.511083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.511103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.511127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.511146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.511164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.511183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.511202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.557 [2024-07-12 19:16:55.511221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.557 [2024-07-12 19:16:55.511245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.557 [2024-07-12 19:16:55.511416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.557 [2024-07-12 19:16:55.511437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.557 [2024-07-12 19:16:55.511456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:08.557 [2024-07-12 19:16:55.511468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.511475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.511494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.511512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
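The qpair dump here is the interesting part of try.txt: while a listener sits in the inaccessible ANA state, each in-flight WRITE/READ on that path completes with status (03/02), i.e. NVMe status code type 3h (Path Related Status) with status code 2h (Asymmetric Access Inaccessible), and the multipath bdev, running the active_active policy set earlier, retries the I/O on the remaining accessible path, so bdevperf's 90-second verify workload keeps running without errors. An illustrative one-liner (not part of the test itself) to tally how many commands were bounced this way in the captured log:

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt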
00:25:08.558 [2024-07-12 19:16:55.511645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.511937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.511944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.512196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.512218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.512242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.512261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.512279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.512298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.512316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.512334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.512434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
00:25:08.558 [2024-07-12 19:16:55.512521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.558 [2024-07-12 19:16:55.512887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.512905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.512923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.558 [2024-07-12 19:16:55.512943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:08.558 [2024-07-12 19:16:55.512954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.512961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.512973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.512980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.512993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.513000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.513019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.513564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.513585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:08.559 [2024-07-12 19:16:55.513604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.513630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.513649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.513668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.513686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.513705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.513724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.513743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.513764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.513783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 
lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.513801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.513821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.513839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.513858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.513877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.513896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.513915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.513934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.513953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.513972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.513984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.513992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:25:08.559 [2024-07-12 19:16:55.514176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514815] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.559 [2024-07-12 19:16:55.514928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.514982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.514989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.515001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:08.559 [2024-07-12 19:16:55.515008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.515020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.515027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.515039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.515045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.515057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.559 [2024-07-12 19:16:55.515064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:08.559 [2024-07-12 19:16:55.515077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 
nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515697] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:25:08.560 [2024-07-12 19:16:55.515887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.515912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.515924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.526921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.526943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.526952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.527308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.560 [2024-07-12 19:16:55.527340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.560 [2024-07-12 19:16:55.527937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
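Every pair in the stretch above follows one pattern: nvme_io_qpair_print_command logs a READ or WRITE submitted on qid:1, and spdk_nvme_print_completion logs the matching completion (same cid) with status "(03/02)" -- Status Code Type 0x3 (Path Related Status), Status Code 0x02 (Asymmetric Access Inaccessible) in NVMe base-spec terms. That is the expected response while the target reports the namespace's ANA group as inaccessible, and dnr:0 (Do Not Retry clear) leaves the host free to retry or fail over. A minimal decoder for the 16-bit status field those lines print, assuming only the spec-defined bit layout (function and variable names here are illustrative, not SPDK's):

    #include <stdio.h>
    #include <stdint.h>

    /* Unpack the 16-bit status field of an NVMe completion (dword 3,
     * upper half). Bit layout per the NVMe base spec: 0 = phase tag (p),
     * 1..8 = status code (sc), 9..11 = status code type (sct),
     * 14 = more (m), 15 = do not retry (dnr). */
    static void decode_status(uint16_t status)
    {
        unsigned p   = status & 0x1;
        unsigned sc  = (status >> 1) & 0xff;
        unsigned sct = (status >> 9) & 0x7;
        unsigned m   = (status >> 14) & 0x1;
        unsigned dnr = (status >> 15) & 0x1;

        printf("(%02x/%02x) p:%u m:%u dnr:%u", sct, sc, p, m, dnr);
        if (sct == 0x3 && sc == 0x02)
            printf("  ASYMMETRIC ACCESS INACCESSIBLE");
        printf("\n");
    }

    int main(void)
    {
        /* sct=0x3, sc=0x02, p/m/dnr all 0: the pattern in every
         * completion above. */
        decode_status((0x3u << 9) | (0x02u << 1));
        return 0;
    }

Note also that sqhd in the completions advances by one per entry (0010, 0011, ... 007f, then wraps to 0000), so a gap in that sequence would indicate a completion missing from the capture.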
00:25:08.560 [2024-07-12 19:16:55.527964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:08.560 [2024-07-12 19:16:55.527981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.527990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.528410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 00:25:08.561 [2024-07-12 19:16:55.528739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.528975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.528984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.529009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.529035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.529061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.529087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.529112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.529138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.529164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.529190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.529216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.529247] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.561 [2024-07-12 19:16:55.529662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.529688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.529715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.561 [2024-07-12 19:16:55.529741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:08.561 [2024-07-12 19:16:55.529757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:59 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.529766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.529783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.529792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.529809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.529818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.529834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.529843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.529860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.529871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.529887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.529897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.529913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.529922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.529939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.529948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.529965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.529974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531059] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 
dnr:0 00:25:08.562 [2024-07-12 19:16:55.531326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.531989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.531998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.562 [2024-07-12 19:16:55.532024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:08.562 [2024-07-12 19:16:55.532397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.562 [2024-07-12 19:16:55.532580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:08.562 [2024-07-12 19:16:55.532597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.563 [2024-07-12 19:16:55.532605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.532622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.563 [2024-07-12 19:16:55.532631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.532648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 
nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.563 [2024-07-12 19:16:55.532657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.532673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.532683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.532700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.532709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.532726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.532735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.532753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.532762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.533090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.533103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.533120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.533130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.533149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.533159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.533175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.533184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.533201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.533210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.533231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.533240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.533257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.533266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.533282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.533291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.533308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.533317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.533333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.533343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.533359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.533368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.533385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.533394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.533411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.563 [2024-07-12 19:16:55.533419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.533436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.563 [2024-07-12 19:16:55.533445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:08.563 [2024-07-12 19:16:55.533464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.563 [2024-07-12 19:16:55.533473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
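Both record shapes in this capture are fixed-format key:value text: command lines carry sqid/cid/nsid/lba/len plus the SGL descriptor (SGL DATA BLOCK OFFSET on the WRITEs, SGL TRANSPORT DATA BLOCK TRANSPORT on the READs), and completion lines carry the matching cid plus the decoded status. To tally how many completions in a console log came back (03/02), a rough stdin filter is enough; a sketch, assuming the capture has first been split back to one record per line (several records here were joined onto single console lines):

    #include <stdio.h>
    #include <string.h>

    /* Count command records and (03/02) completions on stdin, one log
     * record per line. Format assumptions match this capture only. */
    int main(void)
    {
        char line[1024];
        unsigned long commands = 0, inaccessible = 0;

        while (fgets(line, sizeof(line), stdin)) {
            unsigned sqid, cid;
            const char *s = strstr(line, "sqid:");

            if (strstr(line, "nvme_io_qpair_print_command") && s &&
                sscanf(s, "sqid:%u cid:%u", &sqid, &cid) == 2)
                commands++;

            /* "(03/02)": SCT 0x3 / SC 0x02, ANA inaccessible. */
            if (strstr(line, "spdk_nvme_print_completion") &&
                strstr(line, "(03/02)"))
                inaccessible++;
        }
        printf("commands:%lu inaccessible:%lu\n", commands, inaccessible);
        return 0;
    }

Run as ./tally < console.log; in this stretch every command is answered with (03/02), so the two counts should match.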
00:25:08.563 [2024-07-12 19:16:55.533490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.563 [2024-07-12 19:16:55.533499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... long run of similar qpair notices elided: READ commands (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (SGL DATA BLOCK OFFSET 0x0 len:0x1000) on sqid:1, nsid:1, lba 78424-79440, len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0; log timestamps 00:25:08.563-00:25:08.566, wall clock 2024-07-12 19:16:55.533-19:16:55.545 ...]
00:25:08.566 [2024-07-12 19:16:55.545758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:08.566 [2024-07-12 19:16:55.545767] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.545782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.566 [2024-07-12 19:16:55.545790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.545806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.566 [2024-07-12 19:16:55.545814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.545830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.566 [2024-07-12 19:16:55.545839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.545854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.566 [2024-07-12 19:16:55.545862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.545878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.566 [2024-07-12 19:16:55.545886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.545901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.566 [2024-07-12 19:16:55.545909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.545925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.566 [2024-07-12 19:16:55.545933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.545949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.566 [2024-07-12 19:16:55.545957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.545972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.566 [2024-07-12 19:16:55.545981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.545996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:08.566 [2024-07-12 19:16:55.546004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.566 [2024-07-12 19:16:55.546031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.566 [2024-07-12 19:16:55.546054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.566 [2024-07-12 19:16:55.546078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.566 [2024-07-12 19:16:55.546102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.566 [2024-07-12 19:16:55.546125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.566 [2024-07-12 19:16:55.546149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:08.566 [2024-07-12 19:16:55.546647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.566 [2024-07-12 19:16:55.546655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.546670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.546679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.546707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.546718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:25:08.567 [2024-07-12 19:16:55.546735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.546745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.546762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.546771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.546789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.546798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.547448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.547476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.547503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.547531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.547557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.547587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.547615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.547641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.547668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.547695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.547722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.547748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.547775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.547802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.547829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.547856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.547883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.547912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.547939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.547966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.547984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.547993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:32 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.567 [2024-07-12 19:16:55.548708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.548734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.548762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.548789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.548815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.548842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.548869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.548895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.548924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.548951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:08.567 
[2024-07-12 19:16:55.548969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.548978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:08.567 [2024-07-12 19:16:55.548995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.567 [2024-07-12 19:16:55.549005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.549022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.549032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.549049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.549059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.549076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.549086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.549103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.549113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.549130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.549139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.549157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.568 [2024-07-12 19:16:55.549166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.549184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.568 [2024-07-12 19:16:55.549193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.549211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.568 [2024-07-12 19:16:55.549220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.549244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.568 [2024-07-12 19:16:55.549254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.549271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.568 [2024-07-12 19:16:55.549281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.549298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.568 [2024-07-12 19:16:55.549307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.549325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.568 [2024-07-12 19:16:55.549336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.549353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.549363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.550136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.550151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.550171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.550180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.550198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.550207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.550229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.550240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.550257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.550266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.550284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.550293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.550311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.550320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.550337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.550350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.550367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.550377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.550394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.550404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.550422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.550431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.550449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.550458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.550476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.550485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.550503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 [2024-07-12 19:16:55.550512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:08.568 [2024-07-12 19:16:55.550530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.568 
[2024-07-12 19:16:55.550539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:08.568 [2024-07-12 19:16:55.550557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:08.568 [2024-07-12 19:16:55.550567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:25:08.568 [2024-07-12 19:16:55.550584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:08.568 [2024-07-12 19:16:55.550594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:25:08.568 [2024-07-12 19:16:55.550611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:08.568 [2024-07-12 19:16:55.550620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
[... 2024-07-12 19:16:55.550637 through 19:16:55.557826 (elapsed 00:25:08.568-00:25:08.571): several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs, one per outstanding qid:1 READ/WRITE I/O, all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:25:08.571 [2024-07-12 19:16:55.557839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.571 [2024-07-12 19:16:55.557845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:08.571 [2024-07-12 19:16:55.557858]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.571 [2024-07-12 19:16:55.557865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.557877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.571 [2024-07-12 19:16:55.557884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.557896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.571 [2024-07-12 19:16:55.557902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.557914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.571 [2024-07-12 19:16:55.557921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.557933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.571 [2024-07-12 19:16:55.557940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.557952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.571 [2024-07-12 19:16:55.557959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.557971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.571 [2024-07-12 19:16:55.557979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.557992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.571 [2024-07-12 19:16:55.557998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.558010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.571 [2024-07-12 19:16:55.558017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.558029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.571 [2024-07-12 19:16:55.558036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d 
p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.558049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.571 [2024-07-12 19:16:55.558055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.558068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.571 [2024-07-12 19:16:55.558075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.558087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.571 [2024-07-12 19:16:55.558094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.558106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.571 [2024-07-12 19:16:55.558113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.558125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.571 [2024-07-12 19:16:55.558132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.558144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.571 [2024-07-12 19:16:55.558150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.558163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.571 [2024-07-12 19:16:55.558169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.558181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.571 [2024-07-12 19:16:55.558188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:08.571 [2024-07-12 19:16:55.558200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.558207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.558221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.558231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.558243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.558250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.558262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.558268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.558281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.558287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.558299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.558306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.558318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.558325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.558337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.558343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.558355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.558362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.558374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.558381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.558393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.558400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.558412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.558419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.558431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.558437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.558452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.558459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.558471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.558478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:08.572 [2024-07-12 19:16:55.559153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79176 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559534] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.572 [2024-07-12 19:16:55.559658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559728] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.559982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.559995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.560001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.560013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.560020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.560032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.560039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.560051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.560059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:08.572 [2024-07-12 19:16:55.560071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.572 [2024-07-12 19:16:55.560077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.560097] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.560554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 
19:16:55.560728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.573 [2024-07-12 19:16:55.560879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.560898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78680 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.560917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.560936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.560955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.560973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.560986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.560992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.561011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.561030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.561049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.561069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.561088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.561107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.561126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.561145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.561164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.561183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.561201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.561222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.561247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.561266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.573 [2024-07-12 19:16:55.561285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:08.573 [2024-07-12 19:16:55.561297] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.573 [2024-07-12 19:16:55.561304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
[... further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands on qid:1 nsid:1, lba 78424-79440 len:8, cid 0-126, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing from 0054 and wrapping past 007f to 0021, timestamps 2024-07-12 19:16:55.561-19:16:55.567 ...]
00:25:08.576 [2024-07-12 19:16:55.567080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:08.576 [2024-07-12 19:16:55.567086] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.576 [2024-07-12 19:16:55.567105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.576 [2024-07-12 19:16:55.567124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.576 [2024-07-12 19:16:55.567144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.576 [2024-07-12 19:16:55.567163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.576 [2024-07-12 19:16:55.567182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.576 [2024-07-12 19:16:55.567201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.576 [2024-07-12 19:16:55.567221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.576 [2024-07-12 19:16:55.567246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.576 [2024-07-12 19:16:55.567266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:08.576 [2024-07-12 19:16:55.567285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.576 [2024-07-12 19:16:55.567304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.576 [2024-07-12 19:16:55.567325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.576 [2024-07-12 19:16:55.567344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.576 [2024-07-12 19:16:55.567366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.576 [2024-07-12 19:16:55.567390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:08.576 [2024-07-12 19:16:55.567645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.567656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.567678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.567697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.567717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.567736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.567755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.567773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.567792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.567811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.567829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.567848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.567869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.567888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.567906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.567926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.567944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.567963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.567982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.567994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 
dnr:0 00:25:08.577 [2024-07-12 19:16:55.568107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.568435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.568442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.571941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.571951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.571964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.571970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.571982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.571989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:08.577 [2024-07-12 19:16:55.572533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.572688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.572707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 
nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.572728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.572748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.572767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.572786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.577 [2024-07-12 19:16:55.572806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.577 [2024-07-12 19:16:55.572844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:08.577 [2024-07-12 19:16:55.572857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.572863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.572875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.572882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.572894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.572902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.572914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.572923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.572936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.572943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.572955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.572961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.572974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.572981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.572992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.572999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:25:08.578 [2024-07-12 19:16:55.573105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573476] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 
19:16:55.573665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78656 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.578 [2024-07-12 19:16:55.573872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.573922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.573928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.574123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.574132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.574160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.574167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.574184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.574191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:08.578 [2024-07-12 19:16:55.574208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.578 [2024-07-12 19:16:55.574215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:16:55.574243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574260] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:16:55.574267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:16:55.574294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:16:55.574318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:16:55.574342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:16:55.574365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:16:55.574389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:16:55.574413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:16:55.574437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:16:55.574460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574502] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 
p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.574982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.574989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.575006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.575013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.575031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.575037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.575054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.575061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.575078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.575085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.575102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.575109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.575126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.575133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.575150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.575157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.575174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.575182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.575199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 
19:16:55.575206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.575223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:16:55.575235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.575252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:16:55.575259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.575277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:16:55.575284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:16:55.575384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:16:55.575392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:17:08.611075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:17:08.611117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:17:08.611138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:17:08.611157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:17:08.611176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22872 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:17:08.611195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:17:08.611221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:17:08.611245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:17:08.611264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:17:08.611282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:17:08.611301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.579 [2024-07-12 19:17:08.611319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:17:08.611339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:17:08.611358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:17:08.611377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611390] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:17:08.611396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.579 [2024-07-12 19:17:08.611415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:08.579 [2024-07-12 19:17:08.611562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.580 [2024-07-12 19:17:08.611571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:08.580 [2024-07-12 19:17:08.611584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.580 [2024-07-12 19:17:08.611591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:08.580 [2024-07-12 19:17:08.612686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.580 [2024-07-12 19:17:08.612702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:08.580 [2024-07-12 19:17:08.612715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.580 [2024-07-12 19:17:08.612723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:08.580 [2024-07-12 19:17:08.612735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.580 [2024-07-12 19:17:08.612742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:08.580 [2024-07-12 19:17:08.612754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.580 [2024-07-12 19:17:08.612760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:08.580 [2024-07-12 19:17:08.612772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.580 [2024-07-12 19:17:08.612779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.580 [2024-07-12 19:17:08.612791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:08.580 [2024-07-12 19:17:08.612797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.580 [2024-07-12 
19:17:08.612809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:08.580 [2024-07-12 19:17:08.612816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:08.580 [2024-07-12 19:17:08.612829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:08.580 [2024-07-12 19:17:08.612835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:08.580 [2024-07-12 19:17:08.612957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:08.580 [2024-07-12 19:17:08.612968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:08.580 [2024-07-12 19:17:08.612982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:08.580 [2024-07-12 19:17:08.612989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:08.580 [2024-07-12 19:17:08.613002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:08.580 [2024-07-12 19:17:08.613008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:25:08.580 Received shutdown signal, test time was about 27.603692 seconds
00:25:08.580
00:25:08.580 Latency(us)
00:25:08.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:08.580 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:08.580 Verification LBA range: start 0x0 length 0x4000
00:25:08.580 Nvme0n1 : 27.60 10513.85 41.07 0.00 0.00 12153.22 132.67 3078254.41
00:25:08.580 ===================================================================================================================
00:25:08.580 Total : 10513.85 41.07 0.00 0.00 12153.22 132.67 3078254.41
00:25:08.580 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:08.839 19:17:11 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:08.839 rmmod nvme_tcp 00:25:08.839 rmmod nvme_fabrics 00:25:08.839 rmmod nvme_keyring 00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 414305 ']' 00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 414305 00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 414305 ']' 00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 414305 00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 414305 00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 414305' 00:25:08.839 killing process with pid 414305 00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 414305 00:25:08.839 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 414305 00:25:09.098 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:09.098 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:09.098 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:09.098 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:09.098 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:09.098 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.098 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.098 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.635 19:17:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:11.635 00:25:11.635 real 0m39.424s 00:25:11.635 user 1m46.012s 00:25:11.635 sys 0m10.880s 00:25:11.635 19:17:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:11.635 19:17:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:11.635 ************************************ 00:25:11.635 END TEST nvmf_host_multipath_status 00:25:11.635 ************************************ 00:25:11.635 19:17:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:11.635 19:17:13 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:11.635 19:17:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:11.635 19:17:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:11.635 19:17:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:11.635 ************************************ 00:25:11.635 START TEST nvmf_discovery_remove_ifc 00:25:11.635 ************************************ 00:25:11.635 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:11.635 * Looking for test storage... 00:25:11.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:11.635 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.635 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:11.635 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.635 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.635 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.635 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.635 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.635 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.635 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.635 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:11.636 19:17:13 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:25:11.636 19:17:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 
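
For orientation before the PCI scan continues below: the knobs set at the top of this test come straight from discovery_remove_ifc.sh, and everything that follows is nvmftestinit building the test network. A condensed sketch of that preamble, with values exactly as traced above (the RPC and socket plumbing appears later in the trace):

    # discovery_remove_ifc.sh preamble, condensed from the trace above
    discovery_port=8009                                  # target's discovery listener port
    discovery_nqn=nqn.2014-08.org.nvmexpress.discovery   # well-known discovery NQN
    nqn=nqn.2016-06.io.spdk:cnode                        # base NQN for the data subsystem
    host_nqn=nqn.2021-12.io.spdk:test                    # host NQN passed to bdev_nvme_start_discovery
    host_sock=/tmp/host.sock                             # RPC socket of the host-side SPDK app
    nvmftestinit                                         # scans for e810 NICs, builds the netns topology
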
00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:16.911 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:16.911 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:16.911 Found net devices under 0000:86:00.0: cvl_0_0 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:16.911 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:16.912 Found net devices under 0000:86:00.1: cvl_0_1 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 
2 > 1 )) 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:16.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:25:16.912 00:25:16.912 --- 10.0.0.2 ping statistics --- 00:25:16.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.912 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:25:16.912 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:17.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:25:17.171 00:25:17.171 --- 10.0.0.1 ping statistics --- 00:25:17.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.171 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=423082 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 423082 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 423082 ']' 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:17.171 19:17:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.171 [2024-07-12 19:17:19.571194] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
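
At this point the target application is coming up inside the namespace. Everything nvmftestinit and nvmf_tcp_init traced above reduces to a small topology: the first e810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace to act as the target, while the second port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator; the two pings verify the path in both directions. A condensed sketch, with commands as traced and only the nvmf_tgt binary path shortened:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                               # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target ns -> root ns
    modprobe nvme-tcp                                                # kernel initiator support
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target app on core 1 (mask 0x2)
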
00:25:17.171 [2024-07-12 19:17:19.571252] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.171 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.171 [2024-07-12 19:17:19.641095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.171 [2024-07-12 19:17:19.719042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.171 [2024-07-12 19:17:19.719075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.171 [2024-07-12 19:17:19.719083] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.171 [2024-07-12 19:17:19.719092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.171 [2024-07-12 19:17:19.719097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.171 [2024-07-12 19:17:19.719112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.108 [2024-07-12 19:17:20.417980] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.108 [2024-07-12 19:17:20.426099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:18.108 null0 00:25:18.108 [2024-07-12 19:17:20.458111] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=423172 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 423172 /tmp/host.sock 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 423172 ']' 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:18.108 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.108 19:17:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.108 [2024-07-12 19:17:20.524883] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:25:18.108 [2024-07-12 19:17:20.524925] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423172 ] 00:25:18.108 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.108 [2024-07-12 19:17:20.592123] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.108 [2024-07-12 19:17:20.672704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.045 19:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.045 19:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:19.045 19:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:19.045 19:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:19.045 19:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.045 19:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.045 19:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.045 19:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:19.045 19:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.045 19:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.045 19:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.045 19:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:19.045 19:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.045 19:17:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.981 [2024-07-12 19:17:22.476788] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:19.981 [2024-07-12 19:17:22.476808] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:19.981 [2024-07-12 19:17:22.476819] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:20.240 [2024-07-12 19:17:22.603205] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:20.240 [2024-07-12 19:17:22.660060] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:20.240 [2024-07-12 19:17:22.660102] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:20.240 [2024-07-12 19:17:22.660121] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:20.240 [2024-07-12 19:17:22.660133] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:20.240 [2024-07-12 19:17:22.660149] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:20.240 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.240 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:20.240 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:20.240 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.240 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:20.240 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.240 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:20.240 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:20.240 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:20.240 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.240 [2024-07-12 19:17:22.705771] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18f4e30 was disconnected and freed. delete nvme_qpair. 
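The trace above is the host app's bdev_nvme_start_discovery call completing: the discovery controller at 10.0.0.2:8009 attaches, the log page reports the NVM subsystem at 10.0.0.2:4420, and its namespace surfaces as bdev nvme0n1. The get_bdev_list/wait_for_bdev helpers the test keeps invoking below are effectively a poll over the bdev_get_bdevs RPC on the private socket (rpc_cmd is the autotest wrapper around scripts/rpc.py). A minimal sketch of that polling pattern, assuming an SPDK checkout for scripts/rpc.py and jq on PATH; the 30-iteration retry bound is an assumption, not taken from this run:

  RPC_SOCK=/tmp/host.sock
  wait_for_bdev() {
      # Poll until the sorted bdev list equals the expected string
      # (an empty string means "wait until no bdevs are left").
      local expected=$1 names
      for _ in $(seq 1 30); do                      # retry bound: illustrative
          names=$(scripts/rpc.py -s "$RPC_SOCK" bdev_get_bdevs \
                  | jq -r '.[].name' | sort | xargs)
          [[ $names == "$expected" ]] && return 0
          sleep 1
      done
      return 1
  }
  wait_for_bdev nvme0n1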
00:25:20.240 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:20.240 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:20.241 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:20.499 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:20.499 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:20.499 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.499 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:20.499 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.499 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:20.499 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:20.499 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:20.499 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.499 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:20.499 19:17:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:21.435 19:17:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:21.435 19:17:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.435 19:17:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:21.435 19:17:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.435 19:17:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:21.435 19:17:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.435 19:17:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:21.435 19:17:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.435 19:17:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:21.435 19:17:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:22.370 19:17:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:22.370 19:17:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.370 19:17:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:22.370 19:17:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.370 19:17:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:22.370 19:17:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.370 19:17:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:25:22.628 19:17:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.628 19:17:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:22.628 19:17:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:23.564 19:17:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:23.564 19:17:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.564 19:17:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:23.564 19:17:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.564 19:17:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:23.564 19:17:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.564 19:17:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:23.564 19:17:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.564 19:17:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:23.564 19:17:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:24.500 19:17:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.500 19:17:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.500 19:17:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.500 19:17:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.500 19:17:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.500 19:17:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.500 19:17:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.500 19:17:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.759 19:17:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:24.759 19:17:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:25.705 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.705 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.705 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.705 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.705 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.705 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.705 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.705 [2024-07-12 19:17:28.101474] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 
00:25:25.705 [2024-07-12 19:17:28.101520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.705 [2024-07-12 19:17:28.101531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.705 [2024-07-12 19:17:28.101540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.705 [2024-07-12 19:17:28.101547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.705 [2024-07-12 19:17:28.101554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.705 [2024-07-12 19:17:28.101560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.706 [2024-07-12 19:17:28.101567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.706 [2024-07-12 19:17:28.101573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.706 [2024-07-12 19:17:28.101580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.706 [2024-07-12 19:17:28.101586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.706 [2024-07-12 19:17:28.101592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bb690 is same with the state(5) to be set 00:25:25.706 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.706 [2024-07-12 19:17:28.111496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18bb690 (9): Bad file descriptor 00:25:25.706 [2024-07-12 19:17:28.121535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:25.706 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:25.706 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:26.641 19:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:26.641 19:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.641 19:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:26.641 19:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.641 19:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:26.641 19:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:26.641 19:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:26.641 [2024-07-12 19:17:29.186276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:26.641 [2024-07-12 19:17:29.186363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x18bb690 with addr=10.0.0.2, port=4420 00:25:26.641 [2024-07-12 19:17:29.186396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bb690 is same with the state(5) to be set 00:25:26.641 [2024-07-12 19:17:29.186458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18bb690 (9): Bad file descriptor 00:25:26.641 [2024-07-12 19:17:29.187417] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:26.641 [2024-07-12 19:17:29.187465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:26.641 [2024-07-12 19:17:29.187486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:26.641 [2024-07-12 19:17:29.187509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:26.641 [2024-07-12 19:17:29.187569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:26.641 [2024-07-12 19:17:29.187593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:26.641 19:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.898 19:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:26.898 19:17:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:27.832 [2024-07-12 19:17:30.190101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:27.832 [2024-07-12 19:17:30.190136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:27.832 [2024-07-12 19:17:30.190144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:27.832 [2024-07-12 19:17:30.190152] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:27.832 [2024-07-12 19:17:30.190167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:27.832 [2024-07-12 19:17:30.190186] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:27.832 [2024-07-12 19:17:30.190214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.832 [2024-07-12 19:17:30.190227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.832 [2024-07-12 19:17:30.190237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.832 [2024-07-12 19:17:30.190244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.832 [2024-07-12 19:17:30.190252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.832 [2024-07-12 19:17:30.190259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.832 [2024-07-12 19:17:30.190266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.832 [2024-07-12 19:17:30.190278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.832 [2024-07-12 19:17:30.190285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.832 [2024-07-12 19:17:30.190292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.832 [2024-07-12 19:17:30.190299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
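This error burst is the expected effect of the interface removal traced earlier (discovery_remove_ifc.sh@75-76): with 10.0.0.2 deleted and cvl_0_0 down inside the target's namespace, the host's TCP reads time out (errno 110), reset and reconnect attempts fail, and bdev_nvme drops both the data controller and the discovery entry for nqn.2016-06.io.spdk:cnode0. How long it keeps retrying is bounded by the flags passed to bdev_nvme_start_discovery above (--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1). The failure injection itself is two commands; a sketch using the namespace and interface names from this run:

  NETNS=cvl_0_0_ns_spdk
  IFACE=cvl_0_0
  # Drop the target address, then take the link down (mirrors the trace above).
  ip netns exec "$NETNS" ip addr del 10.0.0.2/24 dev "$IFACE"
  ip netns exec "$NETNS" ip link set "$IFACE" down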
00:25:27.832 [2024-07-12 19:17:30.190827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18baa80 (9): Bad file descriptor 00:25:27.832 [2024-07-12 19:17:30.191838] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:27.832 [2024-07-12 19:17:30.191848] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:27.832 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:29.205 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:29.205 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.205 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:29.205 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.205 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:25:29.205 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:29.205 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:29.205 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.205 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:29.205 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:29.772 [2024-07-12 19:17:32.241701] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:29.772 [2024-07-12 19:17:32.241723] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:29.772 [2024-07-12 19:17:32.241736] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:30.031 [2024-07-12 19:17:32.370129] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:30.031 [2024-07-12 19:17:32.432398] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:30.031 [2024-07-12 19:17:32.432433] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:30.031 [2024-07-12 19:17:32.432450] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:30.031 [2024-07-12 19:17:32.432464] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:30.031 [2024-07-12 19:17:32.432471] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.031 [2024-07-12 19:17:32.480870] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18c08a0 was disconnected and freed. delete nvme_qpair. 
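With the address restored and the link back up (discovery_remove_ifc.sh@82-83, traced above), the discovery service reconnects on its own: a new discovery controller attaches, the same subsystem is found again, and the namespace comes back under the next free controller name, nvme1, which is why the test now waits for nvme1n1 instead of nvme0n1. One way to confirm the same state from the RPC side, assuming the stock bdev_nvme_get_controllers RPC; the expected values are what this run should report:

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # expected: nvme1
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'
  # expected: nvme1n1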
00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 423172 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 423172 ']' 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 423172 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 423172 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 423172' 00:25:30.031 killing process with pid 423172 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 423172 00:25:30.031 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 423172 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:30.290 rmmod nvme_tcp 00:25:30.290 rmmod nvme_fabrics 00:25:30.290 rmmod nvme_keyring 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 423082 ']' 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 423082 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 423082 ']' 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 423082 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 423082 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 423082' 00:25:30.290 killing process with pid 423082 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 423082 00:25:30.290 19:17:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 423082 00:25:30.548 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:30.548 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:30.548 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:30.548 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:30.548 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:30.548 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.548 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.548 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.101 19:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:33.101 00:25:33.101 real 0m21.404s 00:25:33.101 user 0m26.850s 00:25:33.101 sys 0m5.516s 00:25:33.101 19:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:33.101 19:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.101 ************************************ 00:25:33.101 END TEST nvmf_discovery_remove_ifc 00:25:33.101 ************************************ 00:25:33.101 19:17:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:33.101 19:17:35 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:33.101 19:17:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:33.101 19:17:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:33.101 19:17:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:33.101 ************************************ 00:25:33.101 START TEST nvmf_identify_kernel_target 00:25:33.101 ************************************ 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:33.101 * Looking for test storage... 
00:25:33.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:33.101 19:17:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.377 
19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:38.377 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:38.377 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:38.377 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:38.378 Found net devices under 0000:86:00.0: cvl_0_0 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:38.378 Found net devices under 0000:86:00.1: cvl_0_1 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip 
addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:38.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:25:38.378 00:25:38.378 --- 10.0.0.2 ping statistics --- 00:25:38.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.378 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:38.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:25:38.378 00:25:38.378 --- 10.0.0.1 ping statistics --- 00:25:38.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.378 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:38.378 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.638 19:17:40 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:38.638 19:17:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:41.176 Waiting for block devices as requested 00:25:41.176 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:41.434 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:41.434 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:41.434 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:41.693 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:41.693 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:41.693 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:41.693 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:41.953 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:41.953 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:41.953 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:42.213 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:42.213 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:42.213 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:42.473 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:42.473 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:42.473 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:42.473 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:42.473 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:42.473 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:42.473 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:42.473 
19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:42.473 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:42.473 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:42.473 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:42.473 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:42.733 No valid GPT data, bailing 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:42.733 00:25:42.733 Discovery Log Number of Records 2, Generation counter 2 00:25:42.733 =====Discovery Log Entry 0====== 00:25:42.733 trtype: tcp 00:25:42.733 adrfam: ipv4 00:25:42.733 subtype: current discovery subsystem 00:25:42.733 treq: not specified, sq flow control disable supported 00:25:42.733 portid: 1 00:25:42.733 trsvcid: 4420 00:25:42.733 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:42.733 traddr: 10.0.0.1 00:25:42.733 eflags: none 00:25:42.733 sectype: none 00:25:42.733 =====Discovery Log Entry 1====== 00:25:42.733 trtype: tcp 00:25:42.733 adrfam: ipv4 00:25:42.733 subtype: nvme subsystem 00:25:42.733 treq: not 
specified, sq flow control disable supported 00:25:42.733 portid: 1 00:25:42.733 trsvcid: 4420 00:25:42.733 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:42.733 traddr: 10.0.0.1 00:25:42.733 eflags: none 00:25:42.733 sectype: none 00:25:42.733 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:42.733 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:42.733 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.733 ===================================================== 00:25:42.733 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:42.733 ===================================================== 00:25:42.733 Controller Capabilities/Features 00:25:42.733 ================================ 00:25:42.733 Vendor ID: 0000 00:25:42.733 Subsystem Vendor ID: 0000 00:25:42.733 Serial Number: 56a2cd7e5b36bb06a93b 00:25:42.733 Model Number: Linux 00:25:42.733 Firmware Version: 6.7.0-68 00:25:42.733 Recommended Arb Burst: 0 00:25:42.733 IEEE OUI Identifier: 00 00 00 00:25:42.733 Multi-path I/O 00:25:42.733 May have multiple subsystem ports: No 00:25:42.733 May have multiple controllers: No 00:25:42.733 Associated with SR-IOV VF: No 00:25:42.733 Max Data Transfer Size: Unlimited 00:25:42.733 Max Number of Namespaces: 0 00:25:42.733 Max Number of I/O Queues: 1024 00:25:42.733 NVMe Specification Version (VS): 1.3 00:25:42.733 NVMe Specification Version (Identify): 1.3 00:25:42.733 Maximum Queue Entries: 1024 00:25:42.733 Contiguous Queues Required: No 00:25:42.733 Arbitration Mechanisms Supported 00:25:42.733 Weighted Round Robin: Not Supported 00:25:42.733 Vendor Specific: Not Supported 00:25:42.733 Reset Timeout: 7500 ms 00:25:42.733 Doorbell Stride: 4 bytes 00:25:42.733 NVM Subsystem Reset: Not Supported 00:25:42.733 Command Sets Supported 00:25:42.733 NVM Command Set: Supported 00:25:42.733 Boot Partition: Not Supported 00:25:42.733 Memory Page Size Minimum: 4096 bytes 00:25:42.733 Memory Page Size Maximum: 4096 bytes 00:25:42.733 Persistent Memory Region: Not Supported 00:25:42.733 Optional Asynchronous Events Supported 00:25:42.733 Namespace Attribute Notices: Not Supported 00:25:42.733 Firmware Activation Notices: Not Supported 00:25:42.733 ANA Change Notices: Not Supported 00:25:42.734 PLE Aggregate Log Change Notices: Not Supported 00:25:42.734 LBA Status Info Alert Notices: Not Supported 00:25:42.734 EGE Aggregate Log Change Notices: Not Supported 00:25:42.734 Normal NVM Subsystem Shutdown event: Not Supported 00:25:42.734 Zone Descriptor Change Notices: Not Supported 00:25:42.734 Discovery Log Change Notices: Supported 00:25:42.734 Controller Attributes 00:25:42.734 128-bit Host Identifier: Not Supported 00:25:42.734 Non-Operational Permissive Mode: Not Supported 00:25:42.734 NVM Sets: Not Supported 00:25:42.734 Read Recovery Levels: Not Supported 00:25:42.734 Endurance Groups: Not Supported 00:25:42.734 Predictable Latency Mode: Not Supported 00:25:42.734 Traffic Based Keep ALive: Not Supported 00:25:42.734 Namespace Granularity: Not Supported 00:25:42.734 SQ Associations: Not Supported 00:25:42.734 UUID List: Not Supported 00:25:42.734 Multi-Domain Subsystem: Not Supported 00:25:42.734 Fixed Capacity Management: Not Supported 00:25:42.734 Variable Capacity Management: Not Supported 00:25:42.734 Delete Endurance Group: Not Supported 00:25:42.734 Delete NVM Set: Not Supported 00:25:42.734 
Extended LBA Formats Supported: Not Supported 00:25:42.734 Flexible Data Placement Supported: Not Supported 00:25:42.734 00:25:42.734 Controller Memory Buffer Support 00:25:42.734 ================================ 00:25:42.734 Supported: No 00:25:42.734 00:25:42.734 Persistent Memory Region Support 00:25:42.734 ================================ 00:25:42.734 Supported: No 00:25:42.734 00:25:42.734 Admin Command Set Attributes 00:25:42.734 ============================ 00:25:42.734 Security Send/Receive: Not Supported 00:25:42.734 Format NVM: Not Supported 00:25:42.734 Firmware Activate/Download: Not Supported 00:25:42.734 Namespace Management: Not Supported 00:25:42.734 Device Self-Test: Not Supported 00:25:42.734 Directives: Not Supported 00:25:42.734 NVMe-MI: Not Supported 00:25:42.734 Virtualization Management: Not Supported 00:25:42.734 Doorbell Buffer Config: Not Supported 00:25:42.734 Get LBA Status Capability: Not Supported 00:25:42.734 Command & Feature Lockdown Capability: Not Supported 00:25:42.734 Abort Command Limit: 1 00:25:42.734 Async Event Request Limit: 1 00:25:42.734 Number of Firmware Slots: N/A 00:25:42.734 Firmware Slot 1 Read-Only: N/A 00:25:42.734 Firmware Activation Without Reset: N/A 00:25:42.734 Multiple Update Detection Support: N/A 00:25:42.734 Firmware Update Granularity: No Information Provided 00:25:42.734 Per-Namespace SMART Log: No 00:25:42.734 Asymmetric Namespace Access Log Page: Not Supported 00:25:42.734 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:42.734 Command Effects Log Page: Not Supported 00:25:42.734 Get Log Page Extended Data: Supported 00:25:42.734 Telemetry Log Pages: Not Supported 00:25:42.734 Persistent Event Log Pages: Not Supported 00:25:42.734 Supported Log Pages Log Page: May Support 00:25:42.734 Commands Supported & Effects Log Page: Not Supported 00:25:42.734 Feature Identifiers & Effects Log Page:May Support 00:25:42.734 NVMe-MI Commands & Effects Log Page: May Support 00:25:42.734 Data Area 4 for Telemetry Log: Not Supported 00:25:42.734 Error Log Page Entries Supported: 1 00:25:42.734 Keep Alive: Not Supported 00:25:42.734 00:25:42.734 NVM Command Set Attributes 00:25:42.734 ========================== 00:25:42.734 Submission Queue Entry Size 00:25:42.734 Max: 1 00:25:42.734 Min: 1 00:25:42.734 Completion Queue Entry Size 00:25:42.734 Max: 1 00:25:42.734 Min: 1 00:25:42.734 Number of Namespaces: 0 00:25:42.734 Compare Command: Not Supported 00:25:42.734 Write Uncorrectable Command: Not Supported 00:25:42.734 Dataset Management Command: Not Supported 00:25:42.734 Write Zeroes Command: Not Supported 00:25:42.734 Set Features Save Field: Not Supported 00:25:42.734 Reservations: Not Supported 00:25:42.734 Timestamp: Not Supported 00:25:42.734 Copy: Not Supported 00:25:42.734 Volatile Write Cache: Not Present 00:25:42.734 Atomic Write Unit (Normal): 1 00:25:42.734 Atomic Write Unit (PFail): 1 00:25:42.734 Atomic Compare & Write Unit: 1 00:25:42.734 Fused Compare & Write: Not Supported 00:25:42.734 Scatter-Gather List 00:25:42.734 SGL Command Set: Supported 00:25:42.734 SGL Keyed: Not Supported 00:25:42.734 SGL Bit Bucket Descriptor: Not Supported 00:25:42.734 SGL Metadata Pointer: Not Supported 00:25:42.734 Oversized SGL: Not Supported 00:25:42.734 SGL Metadata Address: Not Supported 00:25:42.734 SGL Offset: Supported 00:25:42.734 Transport SGL Data Block: Not Supported 00:25:42.734 Replay Protected Memory Block: Not Supported 00:25:42.734 00:25:42.734 Firmware Slot Information 00:25:42.734 ========================= 00:25:42.734 
Active slot: 0 00:25:42.734 00:25:42.734 00:25:42.734 Error Log 00:25:42.734 ========= 00:25:42.734 00:25:42.734 Active Namespaces 00:25:42.734 ================= 00:25:42.734 Discovery Log Page 00:25:42.734 ================== 00:25:42.734 Generation Counter: 2 00:25:42.734 Number of Records: 2 00:25:42.734 Record Format: 0 00:25:42.734 00:25:42.734 Discovery Log Entry 0 00:25:42.734 ---------------------- 00:25:42.734 Transport Type: 3 (TCP) 00:25:42.734 Address Family: 1 (IPv4) 00:25:42.734 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:42.734 Entry Flags: 00:25:42.734 Duplicate Returned Information: 0 00:25:42.734 Explicit Persistent Connection Support for Discovery: 0 00:25:42.734 Transport Requirements: 00:25:42.734 Secure Channel: Not Specified 00:25:42.734 Port ID: 1 (0x0001) 00:25:42.734 Controller ID: 65535 (0xffff) 00:25:42.734 Admin Max SQ Size: 32 00:25:42.734 Transport Service Identifier: 4420 00:25:42.734 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:42.734 Transport Address: 10.0.0.1 00:25:42.734 Discovery Log Entry 1 00:25:42.734 ---------------------- 00:25:42.734 Transport Type: 3 (TCP) 00:25:42.734 Address Family: 1 (IPv4) 00:25:42.734 Subsystem Type: 2 (NVM Subsystem) 00:25:42.734 Entry Flags: 00:25:42.734 Duplicate Returned Information: 0 00:25:42.734 Explicit Persistent Connection Support for Discovery: 0 00:25:42.734 Transport Requirements: 00:25:42.734 Secure Channel: Not Specified 00:25:42.734 Port ID: 1 (0x0001) 00:25:42.734 Controller ID: 65535 (0xffff) 00:25:42.734 Admin Max SQ Size: 32 00:25:42.734 Transport Service Identifier: 4420 00:25:42.734 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:42.734 Transport Address: 10.0.0.1 00:25:42.734 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:42.994 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.994 get_feature(0x01) failed 00:25:42.994 get_feature(0x02) failed 00:25:42.994 get_feature(0x04) failed 00:25:42.994 ===================================================== 00:25:42.994 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:42.994 ===================================================== 00:25:42.994 Controller Capabilities/Features 00:25:42.994 ================================ 00:25:42.994 Vendor ID: 0000 00:25:42.994 Subsystem Vendor ID: 0000 00:25:42.994 Serial Number: 01168015411be80da80e 00:25:42.994 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:42.994 Firmware Version: 6.7.0-68 00:25:42.994 Recommended Arb Burst: 6 00:25:42.994 IEEE OUI Identifier: 00 00 00 00:25:42.994 Multi-path I/O 00:25:42.994 May have multiple subsystem ports: Yes 00:25:42.994 May have multiple controllers: Yes 00:25:42.994 Associated with SR-IOV VF: No 00:25:42.994 Max Data Transfer Size: Unlimited 00:25:42.994 Max Number of Namespaces: 1024 00:25:42.994 Max Number of I/O Queues: 128 00:25:42.994 NVMe Specification Version (VS): 1.3 00:25:42.994 NVMe Specification Version (Identify): 1.3 00:25:42.994 Maximum Queue Entries: 1024 00:25:42.994 Contiguous Queues Required: No 00:25:42.994 Arbitration Mechanisms Supported 00:25:42.994 Weighted Round Robin: Not Supported 00:25:42.994 Vendor Specific: Not Supported 00:25:42.994 Reset Timeout: 7500 ms 00:25:42.994 Doorbell Stride: 4 bytes 00:25:42.994 NVM Subsystem Reset: Not Supported 
00:25:42.994 Command Sets Supported 00:25:42.994 NVM Command Set: Supported 00:25:42.994 Boot Partition: Not Supported 00:25:42.994 Memory Page Size Minimum: 4096 bytes 00:25:42.994 Memory Page Size Maximum: 4096 bytes 00:25:42.994 Persistent Memory Region: Not Supported 00:25:42.994 Optional Asynchronous Events Supported 00:25:42.994 Namespace Attribute Notices: Supported 00:25:42.994 Firmware Activation Notices: Not Supported 00:25:42.994 ANA Change Notices: Supported 00:25:42.994 PLE Aggregate Log Change Notices: Not Supported 00:25:42.994 LBA Status Info Alert Notices: Not Supported 00:25:42.994 EGE Aggregate Log Change Notices: Not Supported 00:25:42.994 Normal NVM Subsystem Shutdown event: Not Supported 00:25:42.994 Zone Descriptor Change Notices: Not Supported 00:25:42.994 Discovery Log Change Notices: Not Supported 00:25:42.994 Controller Attributes 00:25:42.994 128-bit Host Identifier: Supported 00:25:42.994 Non-Operational Permissive Mode: Not Supported 00:25:42.994 NVM Sets: Not Supported 00:25:42.994 Read Recovery Levels: Not Supported 00:25:42.994 Endurance Groups: Not Supported 00:25:42.994 Predictable Latency Mode: Not Supported 00:25:42.994 Traffic Based Keep ALive: Supported 00:25:42.994 Namespace Granularity: Not Supported 00:25:42.994 SQ Associations: Not Supported 00:25:42.994 UUID List: Not Supported 00:25:42.994 Multi-Domain Subsystem: Not Supported 00:25:42.994 Fixed Capacity Management: Not Supported 00:25:42.994 Variable Capacity Management: Not Supported 00:25:42.994 Delete Endurance Group: Not Supported 00:25:42.994 Delete NVM Set: Not Supported 00:25:42.994 Extended LBA Formats Supported: Not Supported 00:25:42.995 Flexible Data Placement Supported: Not Supported 00:25:42.995 00:25:42.995 Controller Memory Buffer Support 00:25:42.995 ================================ 00:25:42.995 Supported: No 00:25:42.995 00:25:42.995 Persistent Memory Region Support 00:25:42.995 ================================ 00:25:42.995 Supported: No 00:25:42.995 00:25:42.995 Admin Command Set Attributes 00:25:42.995 ============================ 00:25:42.995 Security Send/Receive: Not Supported 00:25:42.995 Format NVM: Not Supported 00:25:42.995 Firmware Activate/Download: Not Supported 00:25:42.995 Namespace Management: Not Supported 00:25:42.995 Device Self-Test: Not Supported 00:25:42.995 Directives: Not Supported 00:25:42.995 NVMe-MI: Not Supported 00:25:42.995 Virtualization Management: Not Supported 00:25:42.995 Doorbell Buffer Config: Not Supported 00:25:42.995 Get LBA Status Capability: Not Supported 00:25:42.995 Command & Feature Lockdown Capability: Not Supported 00:25:42.995 Abort Command Limit: 4 00:25:42.995 Async Event Request Limit: 4 00:25:42.995 Number of Firmware Slots: N/A 00:25:42.995 Firmware Slot 1 Read-Only: N/A 00:25:42.995 Firmware Activation Without Reset: N/A 00:25:42.995 Multiple Update Detection Support: N/A 00:25:42.995 Firmware Update Granularity: No Information Provided 00:25:42.995 Per-Namespace SMART Log: Yes 00:25:42.995 Asymmetric Namespace Access Log Page: Supported 00:25:42.995 ANA Transition Time : 10 sec 00:25:42.995 00:25:42.995 Asymmetric Namespace Access Capabilities 00:25:42.995 ANA Optimized State : Supported 00:25:42.995 ANA Non-Optimized State : Supported 00:25:42.995 ANA Inaccessible State : Supported 00:25:42.995 ANA Persistent Loss State : Supported 00:25:42.995 ANA Change State : Supported 00:25:42.995 ANAGRPID is not changed : No 00:25:42.995 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:42.995 00:25:42.995 ANA Group Identifier 
Maximum : 128 00:25:42.995 Number of ANA Group Identifiers : 128 00:25:42.995 Max Number of Allowed Namespaces : 1024 00:25:42.995 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:42.995 Command Effects Log Page: Supported 00:25:42.995 Get Log Page Extended Data: Supported 00:25:42.995 Telemetry Log Pages: Not Supported 00:25:42.995 Persistent Event Log Pages: Not Supported 00:25:42.995 Supported Log Pages Log Page: May Support 00:25:42.995 Commands Supported & Effects Log Page: Not Supported 00:25:42.995 Feature Identifiers & Effects Log Page:May Support 00:25:42.995 NVMe-MI Commands & Effects Log Page: May Support 00:25:42.995 Data Area 4 for Telemetry Log: Not Supported 00:25:42.995 Error Log Page Entries Supported: 128 00:25:42.995 Keep Alive: Supported 00:25:42.995 Keep Alive Granularity: 1000 ms 00:25:42.995 00:25:42.995 NVM Command Set Attributes 00:25:42.995 ========================== 00:25:42.995 Submission Queue Entry Size 00:25:42.995 Max: 64 00:25:42.995 Min: 64 00:25:42.995 Completion Queue Entry Size 00:25:42.995 Max: 16 00:25:42.995 Min: 16 00:25:42.995 Number of Namespaces: 1024 00:25:42.995 Compare Command: Not Supported 00:25:42.995 Write Uncorrectable Command: Not Supported 00:25:42.995 Dataset Management Command: Supported 00:25:42.995 Write Zeroes Command: Supported 00:25:42.995 Set Features Save Field: Not Supported 00:25:42.995 Reservations: Not Supported 00:25:42.995 Timestamp: Not Supported 00:25:42.995 Copy: Not Supported 00:25:42.995 Volatile Write Cache: Present 00:25:42.995 Atomic Write Unit (Normal): 1 00:25:42.995 Atomic Write Unit (PFail): 1 00:25:42.995 Atomic Compare & Write Unit: 1 00:25:42.995 Fused Compare & Write: Not Supported 00:25:42.995 Scatter-Gather List 00:25:42.995 SGL Command Set: Supported 00:25:42.995 SGL Keyed: Not Supported 00:25:42.995 SGL Bit Bucket Descriptor: Not Supported 00:25:42.995 SGL Metadata Pointer: Not Supported 00:25:42.995 Oversized SGL: Not Supported 00:25:42.995 SGL Metadata Address: Not Supported 00:25:42.995 SGL Offset: Supported 00:25:42.995 Transport SGL Data Block: Not Supported 00:25:42.995 Replay Protected Memory Block: Not Supported 00:25:42.995 00:25:42.995 Firmware Slot Information 00:25:42.995 ========================= 00:25:42.995 Active slot: 0 00:25:42.995 00:25:42.995 Asymmetric Namespace Access 00:25:42.995 =========================== 00:25:42.995 Change Count : 0 00:25:42.995 Number of ANA Group Descriptors : 1 00:25:42.995 ANA Group Descriptor : 0 00:25:42.995 ANA Group ID : 1 00:25:42.995 Number of NSID Values : 1 00:25:42.995 Change Count : 0 00:25:42.995 ANA State : 1 00:25:42.995 Namespace Identifier : 1 00:25:42.995 00:25:42.995 Commands Supported and Effects 00:25:42.995 ============================== 00:25:42.995 Admin Commands 00:25:42.995 -------------- 00:25:42.995 Get Log Page (02h): Supported 00:25:42.995 Identify (06h): Supported 00:25:42.995 Abort (08h): Supported 00:25:42.995 Set Features (09h): Supported 00:25:42.995 Get Features (0Ah): Supported 00:25:42.995 Asynchronous Event Request (0Ch): Supported 00:25:42.995 Keep Alive (18h): Supported 00:25:42.995 I/O Commands 00:25:42.995 ------------ 00:25:42.995 Flush (00h): Supported 00:25:42.995 Write (01h): Supported LBA-Change 00:25:42.995 Read (02h): Supported 00:25:42.995 Write Zeroes (08h): Supported LBA-Change 00:25:42.995 Dataset Management (09h): Supported 00:25:42.995 00:25:42.995 Error Log 00:25:42.995 ========= 00:25:42.995 Entry: 0 00:25:42.995 Error Count: 0x3 00:25:42.995 Submission Queue Id: 0x0 00:25:42.995 Command Id: 0x5 
00:25:42.995 Phase Bit: 0 00:25:42.995 Status Code: 0x2 00:25:42.995 Status Code Type: 0x0 00:25:42.995 Do Not Retry: 1 00:25:42.995 Error Location: 0x28 00:25:42.995 LBA: 0x0 00:25:42.995 Namespace: 0x0 00:25:42.995 Vendor Log Page: 0x0 00:25:42.995 ----------- 00:25:42.995 Entry: 1 00:25:42.995 Error Count: 0x2 00:25:42.995 Submission Queue Id: 0x0 00:25:42.995 Command Id: 0x5 00:25:42.995 Phase Bit: 0 00:25:42.995 Status Code: 0x2 00:25:42.995 Status Code Type: 0x0 00:25:42.995 Do Not Retry: 1 00:25:42.995 Error Location: 0x28 00:25:42.995 LBA: 0x0 00:25:42.995 Namespace: 0x0 00:25:42.995 Vendor Log Page: 0x0 00:25:42.995 ----------- 00:25:42.995 Entry: 2 00:25:42.995 Error Count: 0x1 00:25:42.995 Submission Queue Id: 0x0 00:25:42.995 Command Id: 0x4 00:25:42.995 Phase Bit: 0 00:25:42.995 Status Code: 0x2 00:25:42.995 Status Code Type: 0x0 00:25:42.995 Do Not Retry: 1 00:25:42.995 Error Location: 0x28 00:25:42.995 LBA: 0x0 00:25:42.995 Namespace: 0x0 00:25:42.995 Vendor Log Page: 0x0 00:25:42.995 00:25:42.995 Number of Queues 00:25:42.995 ================ 00:25:42.995 Number of I/O Submission Queues: 128 00:25:42.995 Number of I/O Completion Queues: 128 00:25:42.995 00:25:42.995 ZNS Specific Controller Data 00:25:42.995 ============================ 00:25:42.995 Zone Append Size Limit: 0 00:25:42.995 00:25:42.995 00:25:42.995 Active Namespaces 00:25:42.995 ================= 00:25:42.995 get_feature(0x05) failed 00:25:42.995 Namespace ID:1 00:25:42.995 Command Set Identifier: NVM (00h) 00:25:42.995 Deallocate: Supported 00:25:42.995 Deallocated/Unwritten Error: Not Supported 00:25:42.995 Deallocated Read Value: Unknown 00:25:42.995 Deallocate in Write Zeroes: Not Supported 00:25:42.996 Deallocated Guard Field: 0xFFFF 00:25:42.996 Flush: Supported 00:25:42.996 Reservation: Not Supported 00:25:42.996 Namespace Sharing Capabilities: Multiple Controllers 00:25:42.996 Size (in LBAs): 1953525168 (931GiB) 00:25:42.996 Capacity (in LBAs): 1953525168 (931GiB) 00:25:42.996 Utilization (in LBAs): 1953525168 (931GiB) 00:25:42.996 UUID: d99f99e9-0f98-4d97-a519-0169d276b6d9 00:25:42.996 Thin Provisioning: Not Supported 00:25:42.996 Per-NS Atomic Units: Yes 00:25:42.996 Atomic Boundary Size (Normal): 0 00:25:42.996 Atomic Boundary Size (PFail): 0 00:25:42.996 Atomic Boundary Offset: 0 00:25:42.996 NGUID/EUI64 Never Reused: No 00:25:42.996 ANA group ID: 1 00:25:42.996 Namespace Write Protected: No 00:25:42.996 Number of LBA Formats: 1 00:25:42.996 Current LBA Format: LBA Format #00 00:25:42.996 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:42.996 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:42.996 rmmod nvme_tcp 00:25:42.996 rmmod nvme_fabrics 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- 
# set -e 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.996 19:17:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.923 19:17:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:44.923 19:17:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:44.923 19:17:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:44.923 19:17:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:45.198 19:17:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:45.198 19:17:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:45.198 19:17:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:45.198 19:17:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:45.198 19:17:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:45.198 19:17:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:45.198 19:17:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:47.837 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:47.837 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:47.837 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:47.837 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:47.837 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:47.837 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:47.837 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:47.837 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:47.837 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:47.837 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:47.837 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:47.837 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:47.837 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:47.837 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:48.110 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:48.110 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
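The identify_kernel_target pass above builds its target entirely out of the kernel nvmet configfs tree and tears it down the same way: mkdir the subsystem, namespace, and port nodes, echo the attributes, symlink the subsystem into the port, then reverse every step in clean_kernel_target. A standalone sketch of the traced sequence follows; the configfs attribute file names (attr_model, attr_allow_any_host, device_path, addr_*) are the standard kernel nvmet layout and are an assumption here, while the NQN, backing device, and address are the values from this run.

set -e
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                        # provides /sys/kernel/config/nvmet

# Build the target: subsystem, namespace 1 backed by /dev/nvme0n1, TCP port 1.
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo "SPDK-$nqn"  > "$subsys/attr_model"            # assumed attribute mapping
echo 1            > "$subsys/attr_allow_any_host"   # assumed attribute mapping
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port

# ... test traffic ...

# Teardown, mirroring clean_kernel_target: disable, unlink, rmdir, unload.
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet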
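Condensed, the initiator side of that test is three commands: one nvme-cli discovery and two full identifies, first against the discovery subsystem and then against the NVM subsystem (paths shortened here to the repo-relative build directory; the host NQN/ID pair is the one generated for this run).

nvme discover \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
    -a 10.0.0.1 -t tcp -s 4420

spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The get_feature(0x01/0x02/0x04/0x05) failures interleaved with the second dump are spdk_nvme_identify probing optional features the kernel target evidently does not implement; the test still ends in return 0.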
00:25:48.804 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:48.804 00:25:48.804 real 0m16.182s 00:25:48.804 user 0m4.098s 00:25:48.804 sys 0m8.400s 00:25:48.804 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:48.804 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.804 ************************************ 00:25:48.804 END TEST nvmf_identify_kernel_target 00:25:48.804 ************************************ 00:25:49.082 19:17:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:49.082 19:17:51 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:49.082 19:17:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:49.082 19:17:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:49.082 19:17:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:49.082 ************************************ 00:25:49.082 START TEST nvmf_auth_host 00:25:49.082 ************************************ 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:49.082 * Looking for test storage... 00:25:49.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.082 19:17:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:49.083 19:17:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:49.083 19:17:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local 
-ga mlx 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.498 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:54.499 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:54.499 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:54.499 19:17:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:54.499 Found net devices under 0000:86:00.0: cvl_0_0 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:54.499 Found net devices under 0000:86:00.1: cvl_0_1 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:54.499 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:54.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:25:54.759 00:25:54.759 --- 10.0.0.2 ping statistics --- 00:25:54.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.759 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:54.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:54.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:25:54.759 00:25:54.759 --- 10.0.0.1 ping statistics --- 00:25:54.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.759 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=434978 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 434978 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 
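The e810 enumeration a few lines above reduces to a sysfs walk: match each port's PCI ID, then glob the function's net directory, exactly the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion in the trace. A minimal sketch with this host's two ports:

for pci in 0000:86:00.0 0000:86:00.1; do           # 0x8086:0x159b (ice)
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net devices under $pci: ${netdev##*/}"
    done
done
# prints cvl_0_0 and cvl_0_1, the two interfaces nvmf_tcp_init then splits up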
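nvmf_tcp_init, traced above, wires those two ports back-to-back by pinning the target-side one into a private network namespace, so the host and target halves of the TCP tests cross a real link on a single machine. The same sequence as a standalone sketch (interface names and addresses are this run's):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Every target-side process is then prefixed with ip netns exec cvl_0_0_ns_spdk, which is why nvmfappstart launches nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth inside the namespace and waits for it on /var/tmp/spdk.sock.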
00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 434978 ']' 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:54.759 19:17:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c037f9e90bc59983d616323a34f6dfaa 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1zC 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c037f9e90bc59983d616323a34f6dfaa 0 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c037f9e90bc59983d616323a34f6dfaa 0 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c037f9e90bc59983d616323a34f6dfaa 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1zC 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1zC 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.1zC 00:25:55.696 19:17:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aff3e3f2f5c9b821ccd03e7dee6af6b12c5051d3ab52255c27826de2a52622d4 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.K7k 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aff3e3f2f5c9b821ccd03e7dee6af6b12c5051d3ab52255c27826de2a52622d4 3 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aff3e3f2f5c9b821ccd03e7dee6af6b12c5051d3ab52255c27826de2a52622d4 3 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aff3e3f2f5c9b821ccd03e7dee6af6b12c5051d3ab52255c27826de2a52622d4 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.K7k 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.K7k 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.K7k 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6f13fdbc0fc93629b2bc3900e25beac7865bf7339d2e5f13 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xMA 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6f13fdbc0fc93629b2bc3900e25beac7865bf7339d2e5f13 0 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6f13fdbc0fc93629b2bc3900e25beac7865bf7339d2e5f13 0 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 
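Each gen_dhchap_key call above, and each of the remaining ones below, follows one pattern: draw N random bytes as a hex string with xxd, wrap that string in the DH-HMAC-CHAP secret representation DHHC-1:<hash>:<base64(secret+crc32)>:, and store it mode 0600 under /tmp. The second field is the digest id traced above (0 = null, 1 = sha256, 2 = sha384, 3 = sha512). A sketch of the null/32 case; the python body is reconstructed from the traced variables and the TP 8006 secret format, not copied from nvmf/common.sh, and the assumption that the ASCII hex characters themselves serve as the secret bytes is flagged inline:

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars for a 32-byte secret
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                   # assumed: hex chars used verbatim as the secret
crc = zlib.crc32(secret).to_bytes(4, "little")  # trailing CRC32 per the DHHC-1 format
print(f"DHHC-1:00:{base64.b64encode(secret + crc).decode()}:")
EOF
chmod 0600 "$file"

keys[n] holds each host-side secret file and ckeys[n] its controller-side counterpart, matching the keys[0]=/tmp/spdk.key-null.1zC and ckeys[0]=/tmp/spdk.key-sha512.K7k assignments above.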
00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6f13fdbc0fc93629b2bc3900e25beac7865bf7339d2e5f13 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:55.696 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xMA 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xMA 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.xMA 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=93ff44d12f28e619e1f7ede5d86ea5d56de02e41911c942b 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7bh 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 93ff44d12f28e619e1f7ede5d86ea5d56de02e41911c942b 2 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 93ff44d12f28e619e1f7ede5d86ea5d56de02e41911c942b 2 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=93ff44d12f28e619e1f7ede5d86ea5d56de02e41911c942b 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7bh 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7bh 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.7bh 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=6488dfc0628c88954ad8670e66fbd86b 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.60Y 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6488dfc0628c88954ad8670e66fbd86b 1 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6488dfc0628c88954ad8670e66fbd86b 1 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6488dfc0628c88954ad8670e66fbd86b 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.60Y 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.60Y 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.60Y 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f8132f0a321604f08e681ead15d7e368 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8ra 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f8132f0a321604f08e681ead15d7e368 1 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f8132f0a321604f08e681ead15d7e368 1 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f8132f0a321604f08e681ead15d7e368 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8ra 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8ra 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.8ra 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:55.956 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:55.957 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ebbfb5095b2e041ac72dc411be23a2407f4b27a4c9c6378f 00:25:55.957 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:55.957 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.nEB 00:25:55.957 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ebbfb5095b2e041ac72dc411be23a2407f4b27a4c9c6378f 2 00:25:55.957 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ebbfb5095b2e041ac72dc411be23a2407f4b27a4c9c6378f 2 00:25:55.957 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:55.957 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:55.957 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ebbfb5095b2e041ac72dc411be23a2407f4b27a4c9c6378f 00:25:55.957 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:55.957 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.nEB 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.nEB 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.nEB 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=89e4695b367e5262c1d6ed4fbfc75632 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.z1T 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 89e4695b367e5262c1d6ed4fbfc75632 0 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 89e4695b367e5262c1d6ed4fbfc75632 0 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:56.215 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=89e4695b367e5262c1d6ed4fbfc75632 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.z1T 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.z1T 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.z1T 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9056125dfb0e3714ba6443a2cc12a88b2e4ae9405e8c30335cb582ed00f1ec7f 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Pq1 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9056125dfb0e3714ba6443a2cc12a88b2e4ae9405e8c30335cb582ed00f1ec7f 3 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9056125dfb0e3714ba6443a2cc12a88b2e4ae9405e8c30335cb582ed00f1ec7f 3 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9056125dfb0e3714ba6443a2cc12a88b2e4ae9405e8c30335cb582ed00f1ec7f 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Pq1 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Pq1 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Pq1 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 434978 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 434978 ']' 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
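Every gen_dhchap_key call traced above follows the same recipe: xxd pulls len/2 random bytes from /dev/urandom as a hex string, the python one-liner wraps that secret in the DHHC-1:<digest>:<base64>: interchange format (base64 of the secret bytes followed by their CRC-32, per the NVMe DH-HMAC-CHAP secret representation), and the result lands in a mode-0600 tempfile. A minimal stand-alone sketch of that recipe — the function name is hypothetical, and the real implementation is the format_dhchap_key/format_key pair in nvmf/common.sh:

# Sketch only: emit a key file like the ones above (hypothetical helper).
# Assumes the secret bytes are the ASCII hex string itself, which is what the
# DHHC-1 payloads echoed later in this log decode back to.
gen_dhchap_key_sketch() {
  local digest=$1 len=$2 key file b64   # digest index 0-3, key length in hex chars
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
  file=$(mktemp -t spdk.key-XXXXXX)
  b64=$(python3 - "$key" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")   # 4-byte CRC-32 guard
print(base64.b64encode(secret + crc).decode())
PYEOF
)
  printf 'DHHC-1:%02d:%s:\n' "$digest" "$b64" > "$file"
  chmod 0600 "$file"   # matches the chmod in the trace above
  echo "$file"
}

With all five keys[] entries (and their ckeys[] companions, where generated) written out, auth.sh waits for the target process (waitforlisten 434978) before registering them.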
00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:56.216 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.474 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:56.474 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:56.474 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:56.474 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1zC 00:25:56.474 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.474 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.474 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.474 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.K7k ]] 00:25:56.474 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.K7k 00:25:56.474 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.474 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.474 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.474 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:56.474 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.xMA 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.7bh ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7bh 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.60Y 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.8ra ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8ra 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
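The loop above registers each key file with the running target under a fixed keyring name — key0 through key4, plus ckey<i> wherever a companion controller key exists; the key3 and key4 registrations continue just below. Outside the rpc_cmd wrapper, each call is a plain SPDK keyring_file_add_key RPC; the rpc.py and socket paths here are assumed defaults (the trace prints the socket in the waitforlisten message but not the script path):

# Sketch: the registrations the loop performs (paths assumed)
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0 /tmp/spdk.key-null.1zC
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.K7k
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1 /tmp/spdk.key-null.xMA
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7bh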
00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.nEB 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.z1T ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.z1T 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Pq1 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
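nvmet_auth_init has resolved the initiator address (10.0.0.1) and now enters configure_kernel_target, which builds a kernel-mode NVMe-oF target to authenticate against. Note that set -x never prints redirections, so the bare echo lines that follow look like no-ops; reconstructed against the stock nvmet configfs attributes, the wiring is roughly the following — the attribute paths are an assumption, as only the echoed values appear in the trace:

# Sketch of the configfs setup traced below (redirection targets assumed)
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"   # flipped back off once host0 is allow-listed
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

The nvme discover output further down confirms the port came up: two discovery log entries on 10.0.0.1:4420, the discovery subsystem itself and nqn.2024-02.io.spdk:cnode0.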
00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:56.475 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:59.006 Waiting for block devices as requested 00:25:59.006 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:59.265 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:59.265 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:59.265 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:59.524 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:59.524 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:59.524 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:59.783 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:59.783 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:59.783 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:59.783 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:00.041 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:00.041 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:00.041 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:00.041 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:00.300 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:00.300 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:00.866 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:00.866 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:00.866 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:00.866 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:00.866 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:00.866 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:00.866 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:00.866 19:18:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:00.866 19:18:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:00.866 No valid GPT data, bailing 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:00.867 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:01.126 00:26:01.126 Discovery Log Number of Records 2, Generation counter 2 00:26:01.126 =====Discovery Log Entry 0====== 00:26:01.126 trtype: tcp 00:26:01.126 adrfam: ipv4 00:26:01.126 subtype: current discovery subsystem 00:26:01.126 treq: not specified, sq flow control disable supported 00:26:01.126 portid: 1 00:26:01.126 trsvcid: 4420 00:26:01.126 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:01.126 traddr: 10.0.0.1 00:26:01.126 eflags: none 00:26:01.126 sectype: none 00:26:01.126 =====Discovery Log Entry 1====== 00:26:01.126 trtype: tcp 00:26:01.126 adrfam: ipv4 00:26:01.126 subtype: nvme subsystem 00:26:01.126 treq: not specified, sq flow control disable supported 00:26:01.126 portid: 1 00:26:01.126 trsvcid: 4420 00:26:01.126 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:01.126 traddr: 10.0.0.1 00:26:01.126 eflags: none 00:26:01.126 sectype: none 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 
]] 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:01.126 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.127 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.388 nvme0n1 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.388 19:18:03 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.388 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]] 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.389 
19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.389 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.648 nvme0n1 00:26:01.648 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.648 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.648 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.648 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.648 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.648 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.648 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.648 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.648 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.648 19:18:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]] 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.648 nvme0n1 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
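From this point the log is one long matrix walk: for every digest, DH group, and key index, auth.sh programs the kernel host entry via nvmet_auth_set_key (the echoed 'hmac(sha256)', dhgroup name, and DHHC-1 strings presumably land in the host's dhchap_* configfs attributes — xtrace again hides the targets), restricts the initiator to that single combination, attaches with the matching keyring names, and verifies before tearing down. Collapsed, the cycle just traced for sha256/ffdhe2048/keyid 1 amounts to the following, where rpc() is a hypothetical shorthand; every flag is copied from the trace:

# One connect_authenticate cycle, as traced above
rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # hypothetical wrapper
rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
[[ $(rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # auth succeeded
rpc bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines interleaved through the rest of the section are the attach call printing the bdev it created on each successful, authenticated connect.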
00:26:01.648 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.907 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: ]] 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.908 nvme0n1 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.908 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: ]] 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:02.168 19:18:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.168 nvme0n1 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.168 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.427 nvme0n1 00:26:02.427 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.427 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.427 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.427 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.428 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.687 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:02.687 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]] 00:26:02.687 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:02.687 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:02.687 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.687 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.687 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.687 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:02.687 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:02.687 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:02.687 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.687 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.688 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.688 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.688 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.688 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.688 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.688 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.688 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.688 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.688 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.688 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.688 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.688 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.688 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:02.688 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.688 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.948 nvme0n1 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]] 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.948 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.207 nvme0n1 00:26:03.207 
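The trace at this point is walking host/auth.sh's key matrix: for each DH group (ffdhe3072, ffdhe4096 and ffdhe6144 in this excerpt) every key slot 0..4 is first provisioned on the kernel target and then dialled from the SPDK host. Reconstructed from the @101-@104 markers, the driving loop looks roughly like the sketch below; the outer digest loop is an inference (only sha256 appears in this excerpt), and keys[]/ckeys[] hold the DHHC-1 secrets echoed verbatim in the trace.

```bash
# Reconstruction of the sweep traced above (host/auth.sh@101-@104), not the
# verbatim script. keys[]/ckeys[] carry the DHHC-1 strings from the trace.
for digest in "${digests[@]}"; do                  # digest loop inferred; sha256 here
    for dhgroup in "${dhgroups[@]}"; do            # ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do             # key slots 0..4; slot 4 has no ctrlr key
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # provision target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # authenticate from host
        done
    done
done
```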
19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: ]] 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.208 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.467 nvme0n1 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
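The nvmf/common.sh@741-@755 expansions seen before every attach are the get_main_ns_ip helper resolving which address to dial: it maps the transport to the *name* of an environment variable, then dereferences that name. A sketch that matches the trace (this run has the tcp transport and NVMF_INITIATOR_IP=10.0.0.1; TEST_TRANSPORT is an assumed variable name, since xtrace only shows its expanded value):

```bash
# get_main_ns_ip as reconstructed from the nvmf/common.sh trace above.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # target-side IP for RDMA runs
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # initiator loopback IP for TCP runs
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}        # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1 in this log
    [[ -z $ip ]] && return 1
    echo "$ip"
}
```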
00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: ]] 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.467 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.727 nvme0n1 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.727 
19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.727 19:18:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.727 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.987 nvme0n1 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.987 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]] 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:04.556 19:18:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.556 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.816 nvme0n1 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]] 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.816 19:18:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.816 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.076 nvme0n1 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: ]] 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.076 19:18:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.076 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.335 nvme0n1 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
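On the target side, nvmet_auth_set_key (@42-@51) stages the same parameters for the kernel nvmet subsystem before each attach; the bare echoes in the trace carry redirections that xtrace does not print, so their destinations are not visible in this log. A plausible sketch, assuming the standard Linux nvmet in-band-auth configfs attributes:

```bash
# Hypothetical sketch of nvmet_auth_set_key: the redirect targets below are an
# assumption based on the Linux nvmet-auth configfs layout; only the echoed
# values ('hmac(sha256)', the dhgroup, the DHHC-1 keys) appear in the trace.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "${cfs}/dhchap_hash"
    echo "$dhgroup" > "${cfs}/dhchap_dhgroup"
    echo "${keys[keyid]}" > "${cfs}/dhchap_key"
    # controller (bidirectional) key is optional; slot 4 in this run has none
    [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "${cfs}/dhchap_ctrl_key"
}
```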
00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: ]] 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.335 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.336 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.336 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.336 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.336 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.336 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.336 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.336 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.336 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.336 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.336 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.336 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.595 nvme0n1 00:26:05.595 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.595 19:18:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.595 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.595 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.595 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.595 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.595 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.595 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.595 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.595 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.855 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.114 nvme0n1 00:26:06.114 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.114 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.114 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.114 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.114 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.114 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.114 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.114 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.114 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.115 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.115 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.115 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.115 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.115 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:06.115 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.115 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.115 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:06.115 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:06.115 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:06.115 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:06.115 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.115 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:07.493 19:18:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]] 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.493 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.752 nvme0n1 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.752 
19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]] 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.752 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.012 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.012 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.012 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.012 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.012 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.012 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.012 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.012 19:18:10 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.012 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.012 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.012 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.012 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.012 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:08.012 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.012 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.272 nvme0n1 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: ]] 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.272 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.840 nvme0n1 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.840 
19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: ]] 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.840 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.098 nvme0n1 00:26:09.098 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.098 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.098 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.098 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.098 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.098 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.098 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.098 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.098 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.098 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.358 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.617 nvme0n1 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]] 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.617 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.618 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.618 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.618 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.618 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.618 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.618 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.618 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.618 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.186 nvme0n1 00:26:10.187 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.187 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.187 19:18:12 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.187 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.187 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.187 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.187 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.187 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.187 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.187 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]] 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.445 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.011 nvme0n1 00:26:11.011 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.011 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.011 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.011 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.011 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.011 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.011 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.011 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.011 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.011 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: ]] 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.012 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.579 nvme0n1 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.579 
19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: ]] 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.579 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
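
The get_main_ns_ip helper being traced in these entries (nvmf/common.sh@741-755, resuming below) is plain transport-to-variable indirection: it maps the active transport to the name of the environment variable holding the initiator-side IP, then dereferences that name. A reconstruction from the xtrace output here, hedged where the trace elides detail (the return paths at the skipped line numbers are paraphrased, not copied from the source):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Both -z tests trace as nvmf/common.sh@747 above, so they share a line.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @748: ip=NVMF_INITIATOR_IP for tcp
        [[ -z ${!ip} ]] && return 1            # @750: dereference the name, e.g. 10.0.0.1
        echo "${!ip}"                          # @755
    }
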
00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.580 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.148 nvme0n1 00:26:12.148 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.148 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.148 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.148 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.148 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.148 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.406 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:12.407 
19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.407 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.974 nvme0n1 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]] 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.974 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.233 nvme0n1 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]] 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
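
Each connect_authenticate pass in this run boils down to four RPCs on the SPDK host (initiator) side: constrain the negotiable digest and DH group, attach with the per-keyid DH-HMAC-CHAP key pair, confirm the controller actually came up, then detach. A condensed sketch of the pass starting here (sha384/ffdhe2048, keyid 1) using scripts/rpc.py directly; rpc_cmd in the trace is the test suite's wrapper around it, and key1/ckey1 are assumed to be key names registered with SPDK's keyring earlier in the test:

    # Only offer sha384 + ffdhe2048, so the negotiation below must use them.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    # The attach only succeeds if mutual DH-HMAC-CHAP authentication completes.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Verify the controller exists, then tear it down for the next iteration.
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
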
00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.233 nvme0n1 00:26:13.233 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: ]] 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.493 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.493 nvme0n1 00:26:13.493 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.493 19:18:16 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.493 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.493 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.493 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.493 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.493 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.752 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.752 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.752 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.752 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.752 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.752 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:13.752 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.752 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.752 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.752 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:13.752 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:13.752 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:13.752 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: ]] 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.753 nvme0n1 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.753 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.012 nvme0n1 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.012 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]] 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.013 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.272 nvme0n1 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]] 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
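For reference, the sweep being exercised above condenses to the following per-iteration flow. This is a minimal sketch reconstructed from the xtrace, not the verbatim host/auth.sh; it assumes the SPDK test-harness context in which rpc_cmd (the suite's JSON-RPC wrapper), nvmet_auth_set_key, get_main_ns_ip and the keys/ckeys arrays are already defined, and it shows only the commands visible in the log:

    # One iteration of the digest x dhgroup x keyid sweep, using the values
    # this iteration actually logged above.
    digest=sha384 dhgroup=ffdhe3072 keyid=1

    # Target side: nvmet_auth_set_key installs 'hmac(sha384)', the FFDHE
    # group and the per-keyid host/controller secrets (the auth.sh@48-51
    # echoes in the xtrace).
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

    # Host side: pin bdev_nvme to exactly this digest/dhgroup pair so the
    # DH-HMAC-CHAP negotiation cannot fall back to another combination.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"

    # Connect over TCP to the initiator IP resolved by get_main_ns_ip
    # (10.0.0.1 here). The controller key is only passed when ckeys[keyid]
    # is non-empty -- keyid=4 has no controller key, hence the bare
    # --dhchap-key invocations elsewhere in this log.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

    # Authentication succeeded iff the controller shows up under the
    # expected name; detach so the next combination starts from a clean
    # state (the bdev_nvme_get_controllers / detach_controller pairs above).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0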
00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.272 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.532 nvme0n1 00:26:14.532 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.532 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.532 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.532 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.532 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: ]] 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.532 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.791 nvme0n1 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: ]] 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.791 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.050 nvme0n1 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.050 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.051 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.051 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.051 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.051 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.051 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.051 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.051 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.051 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.051 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.051 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:15.051 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.051 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.310 nvme0n1 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.310 19:18:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]] 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.310 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.311 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.311 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.311 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.570 nvme0n1 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]] 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:15.570 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.828 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.087 nvme0n1 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.087 19:18:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: ]] 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.087 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.345 nvme0n1 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: ]] 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:16.345 19:18:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.345 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.604 nvme0n1 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:16.604 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.862 nvme0n1 00:26:16.862 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.862 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.862 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.862 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.862 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.862 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.862 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.862 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.862 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.862 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]] 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.121 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.380 nvme0n1 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]] 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.380 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.949 nvme0n1 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.949 19:18:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: ]] 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.949 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.208 nvme0n1 00:26:18.208 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.208 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.208 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.208 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.208 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: ]] 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.466 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.725 nvme0n1 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.725 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.984 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.984 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.984 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.984 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.984 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.984 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.984 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.984 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.984 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.984 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
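
The ip=NVMF_INITIATOR_IP / [[ -z 10.0.0.1 ]] / echo 10.0.0.1 run that closes each get_main_ns_ip trace above is bash indirect expansion: the helper selects a variable name from ip_candidates and then dereferences it. A minimal reconstruction inferred from the expanded xtrace (the transport variable name is an assumption, since only the literal tcp appears in the log, and any fallback logic on the skipped source lines is omitted):

    # Sketch of get_main_ns_ip as suggested by the nvmf/common.sh xtrace above.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # target-side IP for RDMA runs
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # initiator IP for TCP runs
        # Assumed variable name; the trace only shows the expanded literal 'tcp'.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}         # here: NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                  # indirect deref, expands to 10.0.0.1
        echo "${!ip}"
    }
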
00:26:18.984 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.984 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.984 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:18.984 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.984 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.243 nvme0n1 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]] 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
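
With the ffdhe6144 sweep done, the same five key ids are replayed against ffdhe8192. Each host-side pass reduces to two RPCs plus a verify/detach, all visible verbatim in the trace. A condensed sketch of one pass, with the address, NQNs, and key names copied from the log (rpc_cmd is the harness wrapper around SPDK's rpc.py, and key0/ckey0 are assumed to have been loaded earlier in the script, outside this excerpt):

    # Pin the host to a single digest/DH-group combination for this pass.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    # Connect with the DH-HMAC-CHAP key pair belonging to this key id.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Authentication succeeded iff the expected controller shows up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0    # clean up before the next key id
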
00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.243 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.811 nvme0n1 00:26:19.811 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.811 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.811 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.811 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.811 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.811 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.811 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.811 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.811 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.811 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.070 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.070 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.070 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]] 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.071 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.639 nvme0n1 00:26:20.639 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.639 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.639 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.639 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.639 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.639 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: ]] 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.639 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.207 nvme0n1 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: ]] 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.207 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.775 nvme0n1 00:26:21.775 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.775 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:21.775 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.775 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.775 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.775 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.775 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.775 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.775 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.775 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.034 19:18:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.034 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.602 nvme0n1 00:26:22.602 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.602 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.602 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.602 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.602 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.602 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]] 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.602 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.861 nvme0n1 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.861 19:18:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]] 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.861 nvme0n1 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.861 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.862 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.862 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.862 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.862 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: ]] 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.120 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.121 nvme0n1 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.121 19:18:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: ]] 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.121 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:23.379 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.379 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.379 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.379 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.379 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.379 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.379 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.379 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.379 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.379 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.379 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.379 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.379 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.379 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.380 19:18:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.380 nvme0n1 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.380 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.639 nvme0n1 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]] 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.639 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.898 nvme0n1 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.898 
19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]] 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.898 19:18:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.898 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.158 nvme0n1 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
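The trace through this stretch is host/auth.sh working its nested loops: for every DH group it iterates all of the key ids, programming the target side with nvmet_auth_set_key and then exercising the path with connect_authenticate. A condensed sketch of that driver, reconstructed from the host/auth.sh@101-104 frames visible above; the outer digests loop and the contents of the keys/ckeys arrays are not visible in this excerpt and are assumed:

    # Driver loop as traced at host/auth.sh@101-104 (sketch; the keys/ckeys
    # arrays are populated by test setup outside this excerpt).
    for dhgroup in "${dhgroups[@]}"; do                      # ffdhe2048, ffdhe3072, ...
        for keyid in "${!keys[@]}"; do                       # key ids 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
        done
    done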
00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: ]] 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.158 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.417 nvme0n1 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.417 19:18:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: ]] 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
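The recurring nvmf/common.sh@741-755 frames are get_main_ns_ip resolving which address to hand to bdev_nvme_attach_controller: the transport selects the name of an environment variable, and that variable's value (10.0.0.1 for tcp in this run) is echoed back. A sketch under that reading; the indirect expansion and the TEST_TRANSPORT name are assumptions inferred from the name/value pairing in the trace:

    # get_main_ns_ip as traced at nvmf/common.sh@741-755 (sketch).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}   # picks the variable *name*
        [[ -z $ip || -z ${!ip} ]] && return 1  # the [[ -z ... ]] guards seen in the trace
        echo "${!ip}"                          # -> 10.0.0.1 here
    }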
00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.417 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.418 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.418 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.418 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.418 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.418 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.676 nvme0n1 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.676 
19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.676 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.677 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.936 nvme0n1 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]] 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.936 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.195 nvme0n1 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==: 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]] 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.195 19:18:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.195 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.454 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.454 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.454 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.454 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.454 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.454 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.454 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.454 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.454 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.454 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.454 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.454 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:25.454 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.454 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.454 nvme0n1 00:26:25.454 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.454 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.454 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.454 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.454 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
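The three echo frames inside nvmet_auth_set_key ('hmac(sha512)', the DH group, the DHHC-1 secret, plus a controller secret when one is defined) configure the kernel nvmet target for the host NQN. set -x does not print redirections, so their destinations never appear in this log; the configfs attribute paths below are an assumption based on the kernel's nvmet DH-HMAC-CHAP interface, not something taken from the trace:

    # Plausible body of nvmet_auth_set_key (host/auth.sh@42-51); paths assumed.
    host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"
    echo "$dhgroup"     > "$host_cfg/dhchap_dhgroup"
    echo "$key"         > "$host_cfg/dhchap_key"
    [[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"  # bidirectional auth only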
00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN: 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: ]] 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.714 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 nvme0n1 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:25.973 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==: 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: ]] 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.974 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.233 nvme0n1 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=: 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.233 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.492 nvme0n1 00:26:26.492 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.492 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.492 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.492 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.492 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.492 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA: 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]] 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
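Each connect_authenticate pass then reduces to four RPCs against the SPDK host stack, all of which appear verbatim in the trace: pin the negotiable digest and DH group, attach with the key pair, confirm the controller actually materialized, and detach before the next combination. Condensed for the ffdhe6144/key0 pass underway here:

    # RPC sequence traced at host/auth.sh@60-65 for this pass.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0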
00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:26.492 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA:
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=:
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA:
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]]
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=:
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:26.493 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.062 nvme0n1
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
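The keyid=0 round for ffdhe6144 completes here and the trace moves on to keyid=1. Paraphrased, the driver visible at host/auth.sh@101-104 is a nested loop over every DH group and key index; each iteration installs the key on the kernel target and then authenticates against it from the SPDK host:

# Condensed paraphrase of the traced loop (host/auth.sh@101-104); helper
# bodies are elided, names are exactly those shown in the trace.
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"    # host side
    done
done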
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==:
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==:
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==:
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]]
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==:
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:27.062 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.321 nvme0n1
00:26:27.321 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:27.321 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:27.321 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:27.321 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:27.321 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN:
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX:
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN:
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: ]]
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX:
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:27.581 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:27.582 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.841 nvme0n1
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==:
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD:
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:27.841 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==:
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: ]]
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD:
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:27.842 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:28.101 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:28.101 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:28.101 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:28.361 nvme0n1
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
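Each connect_authenticate round above reduces to four RPC calls. A condensed equivalent using SPDK's rpc.py directly (the script path, flags, and the key names key3/ckey3 are taken from this trace; rpc.py reaching the target over its default socket is an assumption):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3     # prints nvme0n1 on success
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0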
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=:
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=:
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:28.361 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:28.930 nvme0n1
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA:
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=:
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzAzN2Y5ZTkwYmM1OTk4M2Q2MTYzMjNhMzRmNmRmYWFpoqwA:
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=: ]]
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZmM2UzZjJmNWM5YjgyMWNjZDAzZTdkZWU2YWY2YjEyYzUwNTFkM2FiNTIyNTVjMjc4MjZkZTJhNTI2MjJkNII++n8=:
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:28.930 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:29.499 nvme0n1
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==:
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==:
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==:
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]]
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==:
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:29.499 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:30.067 nvme0n1
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
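The get_main_ns_ip block repeated before every attach is plain address selection. Paraphrased from the trace (nvmf/common.sh@741-755): look up the variable name for the active transport, then expand it indirectly; in this run that always resolves to 10.0.0.1.

get_main_ns_ip() {   # paraphrase of the traced helper; TEST_TRANSPORT=tcp here
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    ip=${ip_candidates[$TEST_TRANSPORT]}
    echo "${!ip}"    # indirect expansion -> 10.0.0.1
}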
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN:
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX:
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjQ4OGRmYzA2MjhjODg5NTRhZDg2NzBlNjZmYmQ4NmLaE8aN:
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX: ]]
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjgxMzJmMGEzMjE2MDRmMDhlNjgxZWFkMTVkN2UzNjgv2XTX:
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:30.067 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.002 nvme0n1
00:26:31.002 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:31.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:31.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:31.002 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:31.002 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.002 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:31.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:31.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:31.002 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:31.002 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.002 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:31.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==:
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD:
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWJiZmI1MDk1YjJlMDQxYWM3MmRjNDExYmUyM2EyNDA3ZjRiMjdhNGM5YzYzNzhmU7jD7g==:
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD: ]]
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODllNDY5NWIzNjdlNTI2MmMxZDZlZDRmYmZjNzU2MzLmdtSD:
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:31.003 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.571 nvme0n1
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=:
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTA1NjEyNWRmYjBlMzcxNGJhNjQ0M2EyY2MxMmE4OGIyZTRhZTk0MDVlOGMzMDMzNWNiNTgyZWQwMGYxZWM3ZnEd6aY=:
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:31.571 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.140 nvme0n1
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==:
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==:
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYxM2ZkYmMwZmM5MzYyOWIyYmMzOTAwZTI1YmVhYzc4NjViZjczMzlkMmU1ZjEzzCmmzQ==:
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==: ]]
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTNmZjQ0ZDEyZjI4ZTYxOWUxZjdlZGU1ZDg2ZWE1ZDU2ZGUwMmU0MTkxMWM5NDJiwQvtJg==:
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
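With sha256/ffdhe2048 and key1 now installed on the kernel target, the remaining attempts are negative tests: attaching with no key, with the wrong host key (key2), and with the wrong controller key (ckey2) must all fail. The NOT wrapper seen below inverts a command's exit status; roughly, paraphrasing the helper traced at common/autotest_common.sh@648-675 with its error-code bookkeeping elided:

NOT() {
    local es=0
    "$@" || es=$?    # run the wrapped command, remember its status
    ((es != 0))      # the wrapper succeeds only if the command failed
}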
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.140 request:
00:26:32.140 {
00:26:32.140 "name": "nvme0",
00:26:32.140 "trtype": "tcp",
00:26:32.140 "traddr": "10.0.0.1",
00:26:32.140 "adrfam": "ipv4",
00:26:32.140 "trsvcid": "4420",
00:26:32.140 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:26:32.140 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:26:32.140 "prchk_reftag": false,
00:26:32.140 "prchk_guard": false,
00:26:32.140 "hdgst": false,
00:26:32.140 "ddgst": false,
00:26:32.140 "method": "bdev_nvme_attach_controller",
00:26:32.140 "req_id": 1
00:26:32.140 }
00:26:32.140 Got JSON-RPC error response
00:26:32.140 response:
00:26:32.140 {
00:26:32.140 "code": -5,
00:26:32.140 "message": "Input/output error"
00:26:32.140 }
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:32.140 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.398 request:
00:26:32.398 {
00:26:32.398 "name": "nvme0",
00:26:32.398 "trtype": "tcp",
00:26:32.398 "traddr": "10.0.0.1",
00:26:32.398 "adrfam": "ipv4",
00:26:32.398 "trsvcid": "4420",
00:26:32.398 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:26:32.398 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:26:32.398 "prchk_reftag": false,
00:26:32.398 "prchk_guard": false,
00:26:32.398 "hdgst": false,
00:26:32.398 "ddgst": false,
00:26:32.398 "dhchap_key": "key2",
00:26:32.398 "method": "bdev_nvme_attach_controller",
00:26:32.398 "req_id": 1
00:26:32.398 }
00:26:32.398 Got JSON-RPC error response
00:26:32.398 response:
00:26:32.398 {
00:26:32.398 "code": -5,
00:26:32.398 "message": "Input/output error"
00:26:32.398 }
00:26:32.398 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:26:32.398 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:26:32.398 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:26:32.398 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:26:32.398 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:26:32.398 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:26:32.398 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:26:32.398 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:32.398 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:32.398 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:32.398 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
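Two failure modes are covered so far: no key at all, and a mismatched host key (key2), both rejected with JSON-RPC error -5 (the attach simply fails and rpc.py reports Input/output error). The third attempt below supplies the correct host key but a deliberately wrong controller key, exercising the bidirectional half of DH-HMAC-CHAP; the call shape, copied from the trace:

NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey2    # expected to fail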
00:26:32.399 "trsvcid": "4420", 00:26:32.399 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:32.399 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:32.399 "prchk_reftag": false, 00:26:32.399 "prchk_guard": false, 00:26:32.399 "hdgst": false, 00:26:32.399 "ddgst": false, 00:26:32.399 "dhchap_key": "key1", 00:26:32.399 "dhchap_ctrlr_key": "ckey2", 00:26:32.399 "method": "bdev_nvme_attach_controller", 00:26:32.399 "req_id": 1 00:26:32.399 } 00:26:32.399 Got JSON-RPC error response 00:26:32.399 response: 00:26:32.399 { 00:26:32.399 "code": -5, 00:26:32.399 "message": "Input/output error" 00:26:32.399 } 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:32.399 rmmod nvme_tcp 00:26:32.399 rmmod nvme_fabrics 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 434978 ']' 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 434978 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 434978 ']' 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 434978 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:32.399 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 434978 00:26:32.657 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:32.657 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:32.657 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 434978' 00:26:32.657 killing process with pid 434978 00:26:32.657 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 434978 00:26:32.657 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 434978 00:26:32.657 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:26:32.657 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:32.657 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:32.657 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:32.657 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:32.657 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.657 19:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:32.657 19:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.189 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:35.189 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:35.189 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:35.189 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:35.189 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:35.189 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:35.189 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:35.189 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:35.189 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:35.189 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:35.189 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:35.189 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:35.189 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:37.726 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:37.726 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:37.726 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:37.726 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:37.726 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:37.726 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:37.726 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:37.726 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:37.726 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:37.726 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:37.726 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:37.726 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:37.726 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:37.726 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:37.726 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:37.726 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:38.664 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:38.664 19:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.1zC /tmp/spdk.key-null.xMA /tmp/spdk.key-sha256.60Y /tmp/spdk.key-sha384.nEB /tmp/spdk.key-sha512.Pq1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:38.664 19:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:41.200 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:41.200 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:41.200 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:41.200 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:41.200 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:41.200 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:41.200 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:41.200 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:41.200 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:41.200 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:41.200 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:41.200 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:41.200 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:41.201 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:41.460 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:41.460 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:41.460 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:41.460 00:26:41.460 real 0m52.490s 00:26:41.460 user 0m47.118s 00:26:41.460 sys 0m12.352s 00:26:41.460 19:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:41.460 19:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.460 ************************************ 00:26:41.460 END TEST nvmf_auth_host 00:26:41.460 ************************************ 00:26:41.460 19:18:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:41.460 19:18:43 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:26:41.460 19:18:43 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:41.460 19:18:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:41.460 19:18:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:41.460 19:18:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.460 ************************************ 00:26:41.460 START TEST nvmf_digest 00:26:41.460 ************************************ 00:26:41.460 19:18:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:41.720 * Looking for test storage... 
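The cleanup above tears down the kernel NVMe-oF target through configfs in strict child-before-parent order before unloading nvmet_tcp and nvmet. A hedged standalone sketch of that clean_kernel_target sequence, assuming the single-namespace, single-port layout this test created; bash xtrace does not print redirection targets, so the namespace enable attribute as the destination of the bare echo 0 is an assumption:

  nqn=nqn.2024-02.io.spdk:cnode0
  host=nqn.2024-02.io.spdk:host0
  cfg=/sys/kernel/config/nvmet
  rm "$cfg/subsystems/$nqn/allowed_hosts/$host"        # drop the host ACL link first
  rmdir "$cfg/hosts/$host"
  echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"  # assumed target of the bare 'echo 0'
  rm -f "$cfg/ports/1/subsystems/$nqn"                 # detach subsystem from the port
  rmdir "$cfg/subsystems/$nqn/namespaces/1"
  rmdir "$cfg/ports/1"
  rmdir "$cfg/subsystems/$nqn"
  modprobe -r nvmet_tcp nvmet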
00:26:41.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:41.720 19:18:44 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:26:41.720 19:18:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.290 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.290 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:48.290 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:48.290 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:48.290 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:48.290 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:48.290 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:48.290 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:48.291 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:48.291 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:48.291 Found net devices under 0000:86:00.0: cvl_0_0 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:48.291 Found net devices under 0000:86:00.1: cvl_0_1 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:48.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:26:48.291 00:26:48.291 --- 10.0.0.2 ping statistics --- 00:26:48.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.291 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:48.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:26:48.291 00:26:48.291 --- 10.0.0.1 ping statistics --- 00:26:48.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.291 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.291 ************************************ 00:26:48.291 START TEST nvmf_digest_clean 00:26:48.291 ************************************ 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=449032 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 449032 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 449032 ']' 00:26:48.291 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.291 
19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:48.292 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.292 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:48.292 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.292 [2024-07-12 19:18:49.969604] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:26:48.292 [2024-07-12 19:18:49.969647] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.292 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.292 [2024-07-12 19:18:50.043565] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.292 [2024-07-12 19:18:50.127563] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.292 [2024-07-12 19:18:50.127599] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.292 [2024-07-12 19:18:50.127605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.292 [2024-07-12 19:18:50.127611] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.292 [2024-07-12 19:18:50.127617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
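The EAL banner above is nvmf_tgt coming up inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, while waitforlisten polls until the RPC socket answers. A hedged sketch of that startup handshake, using rpc_get_methods as the liveness probe (waitforlisten in autotest_common.sh adds retry limits and more careful process checks):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the app responds.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; break; }
      sleep 0.5
  done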
00:26:48.292 [2024-07-12 19:18:50.127634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.292 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:48.292 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:48.292 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:48.292 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:48.292 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.292 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.292 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:48.292 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:48.292 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:48.292 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.292 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.551 null0 00:26:48.551 [2024-07-12 19:18:50.914520] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.551 [2024-07-12 19:18:50.938677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=449161 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 449161 /var/tmp/bperf.sock 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 449161 ']' 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:48.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:48.551 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.551 [2024-07-12 19:18:50.987266] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:26:48.551 [2024-07-12 19:18:50.987305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449161 ] 00:26:48.551 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.551 [2024-07-12 19:18:51.053944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.809 [2024-07-12 19:18:51.133101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.378 19:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:49.378 19:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:49.378 19:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:49.378 19:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:49.378 19:18:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:49.637 19:18:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:49.637 19:18:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:49.896 nvme0n1 00:26:50.155 19:18:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:50.155 19:18:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:50.155 Running I/O for 2 seconds... 
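As a quick cross-check on the results that follow: throughput in MiB/s should equal IOPS times the 4096-byte I/O size, and it does for the nvme0n1 row (a sketch; bc is assumed to be available):

  echo "scale=4; 25612.78 * 4096 / 1048576" | bc   # -> 100.0499, the 100.05 MiB/s reported below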
00:26:52.057
00:26:52.057 Latency(us)
00:26:52.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:52.057 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:52.057 nvme0n1 : 2.01 25612.78 100.05 0.00 0.00 4992.39 2621.44 11226.60
00:26:52.057 ===================================================================================================================
00:26:52.057 Total : 25612.78 100.05 0.00 0.00 4992.39 2621.44 11226.60
00:26:52.057 0
00:26:52.057 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:52.057 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:52.057 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:52.057 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:52.057 | select(.opcode=="crc32c")
00:26:52.057 | "\(.module_name) \(.executed)"'
00:26:52.057 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:52.316 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:52.316 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:52.316 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:52.316 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:52.316 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 449161
00:26:52.316 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 449161 ']'
00:26:52.316 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 449161
00:26:52.316 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:52.316 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:52.316 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 449161
00:26:52.316 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:52.316 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:52.316 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 449161'
killing process with pid 449161
19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 449161
Received shutdown signal, test time was about 2.000000 seconds
00:26:52.316
00:26:52.316 Latency(us)
00:26:52.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:52.316 ===================================================================================================================
00:26:52.316 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:52.316 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 449161
00:26:52.575 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:26:52.575 19:18:55
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:52.575 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:52.575 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:52.575 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:52.575 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:52.575 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:52.575 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=449854 00:26:52.575 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 449854 /var/tmp/bperf.sock 00:26:52.575 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 449854 ']' 00:26:52.575 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:52.575 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:52.575 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:52.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:52.575 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:52.575 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:52.575 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:52.575 [2024-07-12 19:18:55.064476] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:26:52.575 [2024-07-12 19:18:55.064526] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449854 ] 00:26:52.575 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:52.575 Zero copy mechanism will not be used. 
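Each run_bperf invocation drives one workload through a fresh bdevperf instance on its own RPC socket. A hedged sketch of the full flow for this 128 KiB, queue-depth-16 random-read case, with every command copied from the trace (paths are relative to the SPDK repo root, and digest.sh uses waitforlisten rather than a fixed sleep):

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  bperfpid=$!
  sleep 1                                             # stand-in for waitforlisten
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  kill "$bperfpid"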
00:26:52.575 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.575 [2024-07-12 19:18:55.129671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.834 [2024-07-12 19:18:55.198562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.400 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:53.400 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:53.400 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:53.400 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:53.400 19:18:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:53.659 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.659 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.918 nvme0n1 00:26:53.918 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:53.918 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:53.918 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:53.918 Zero copy mechanism will not be used. 00:26:53.918 Running I/O for 2 seconds... 
00:26:56.453
00:26:56.453 Latency(us)
00:26:56.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:56.453 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:56.453 nvme0n1 : 2.00 5763.95 720.49 0.00 0.00 2772.71 666.05 5841.25
00:26:56.453 ===================================================================================================================
00:26:56.453 Total : 5763.95 720.49 0.00 0.00 2772.71 666.05 5841.25
00:26:56.453 0
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:56.453 | select(.opcode=="crc32c")
00:26:56.453 | "\(.module_name) \(.executed)"'
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 449854
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 449854 ']'
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 449854
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 449854
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 449854'
killing process with pid 449854
19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 449854
Received shutdown signal, test time was about 2.000000 seconds
00:26:56.453
00:26:56.453 Latency(us)
00:26:56.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:56.453 ===================================================================================================================
00:26:56.453 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 449854
00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:26:56.453 19:18:58
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=450542 00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 450542 /var/tmp/bperf.sock 00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:56.453 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 450542 ']' 00:26:56.454 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:56.454 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:56.454 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:56.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:56.454 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:56.454 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:56.454 [2024-07-12 19:18:58.953271] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:26:56.454 [2024-07-12 19:18:58.953322] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450542 ] 00:26:56.454 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.454 [2024-07-12 19:18:59.017412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.713 [2024-07-12 19:18:59.097539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.282 19:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:57.282 19:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:57.282 19:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:57.282 19:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:57.282 19:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:57.541 19:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.541 19:18:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.800 nvme0n1 00:26:57.800 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:57.800 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:58.059 Running I/O for 2 seconds... 
00:26:59.966
00:26:59.966 Latency(us)
00:26:59.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:59.966 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:59.966 nvme0n1 : 2.00 28058.84 109.60 0.00 0.00 4555.66 2293.76 14702.86
00:26:59.966 ===================================================================================================================
00:26:59.966 Total : 28058.84 109.60 0.00 0.00 4555.66 2293.76 14702.86
00:26:59.967 0
00:26:59.967 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:59.967 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:59.967 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:59.967 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:59.967 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:59.967 | select(.opcode=="crc32c")
00:26:59.967 | "\(.module_name) \(.executed)"'
00:27:00.226 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:00.226 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:00.226 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:00.226 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:00.226 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 450542
00:27:00.226 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 450542 ']'
00:27:00.226 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 450542
00:27:00.226 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:27:00.226 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:00.226 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 450542
00:27:00.226 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:00.226 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:00.226 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 450542'
killing process with pid 450542
19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 450542
Received shutdown signal, test time was about 2.000000 seconds
00:27:00.226
00:27:00.226 Latency(us)
00:27:00.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:00.226 ===================================================================================================================
00:27:00.226 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:00.226 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 450542
00:27:00.484 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:27:00.484 19:19:02
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:00.484 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:00.484 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:00.484 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:00.484 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:00.484 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:00.484 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=451181 00:27:00.484 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 451181 /var/tmp/bperf.sock 00:27:00.484 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:00.484 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 451181 ']' 00:27:00.484 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:00.484 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:00.484 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:00.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:00.484 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:00.484 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:00.484 [2024-07-12 19:19:02.915816] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:27:00.484 [2024-07-12 19:19:02.915864] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451181 ] 00:27:00.484 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:00.484 Zero copy mechanism will not be used. 
00:27:00.484 EAL: No free 2048 kB hugepages reported on node 1
00:27:00.484 [2024-07-12 19:19:02.983364] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:00.743 [2024-07-12 19:19:03.062414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:01.311 19:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:01.311 19:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:27:01.311 19:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:27:01.311 19:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:27:01.311 19:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:27:01.570 19:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:01.570 19:19:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:01.829 nvme0n1
00:27:01.829 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:27:01.829 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:01.829 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:01.829 Zero copy mechanism will not be used.
00:27:01.829 Running I/O for 2 seconds...
00:27:04.365
00:27:04.365 Latency(us)
00:27:04.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:04.365 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:04.365 nvme0n1 : 2.00 6832.59 854.07 0.00 0.00 2338.03 1503.05 4331.07
00:27:04.365 ===================================================================================================================
00:27:04.365 Total : 6832.59 854.07 0.00 0.00 2338.03 1503.05 4331.07
00:27:04.365 0
00:27:04.365 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:04.365 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:04.365 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:04.365 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:04.365 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:04.365 | select(.opcode=="crc32c")
00:27:04.365 | "\(.module_name) \(.executed)"'
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 451181
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 451181 ']'
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 451181
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 451181
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 451181'
killing process with pid 451181
19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 451181
Received shutdown signal, test time was about 2.000000 seconds
00:27:04.366
00:27:04.366 Latency(us)
00:27:04.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:04.366 ===================================================================================================================
00:27:04.366 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 451181
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 449032
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 449032 ']'
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 449032
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 449032
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:27:04.366 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 449032'
killing process with pid 449032
19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 449032
19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 449032
00:27:04.626
00:27:04.626 real 0m17.100s
00:27:04.626 user 0m32.581s
00:27:04.626 sys 0m4.784s
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:27:04.626 ************************************
00:27:04.626 END TEST nvmf_digest_clean
00:27:04.626 ************************************
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:27:04.626 ************************************
00:27:04.626 START TEST nvmf_digest_error
00:27:04.626 ************************************
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=451865
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 451865
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 451865 ']'
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:04.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:04.626 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:04.626 [2024-07-12 19:19:07.144016] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:27:04.626 [2024-07-12 19:19:07.144060] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:04.626 EAL: No free 2048 kB hugepages reported on node 1
00:27:04.886 [2024-07-12 19:19:07.215914] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:04.886 [2024-07-12 19:19:07.293636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:04.886 [2024-07-12 19:19:07.293672] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:04.886 [2024-07-12 19:19:07.293679] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:04.886 [2024-07-12 19:19:07.293686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:04.886 [2024-07-12 19:19:07.293691] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:04.886 [2024-07-12 19:19:07.293727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.456 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:05.456 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:05.456 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:05.456 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:05.456 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:05.456 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.456 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:05.456 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.456 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:05.456 [2024-07-12 19:19:07.987740] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:05.456 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.456 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:05.456 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:05.456 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.456 19:19:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:05.715 null0 00:27:05.715 [2024-07-12 19:19:08.081939] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.715 [2024-07-12 19:19:08.106103] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.715 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.715 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:05.715 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:05.715 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:05.715 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:05.715 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:05.715 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=451997 00:27:05.715 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 451997 /var/tmp/bperf.sock 00:27:05.715 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:05.715 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 451997 ']' 00:27:05.715 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:05.715 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:27:05.715 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:05.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:05.715 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:05.715 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:05.715 [2024-07-12 19:19:08.156349] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:27:05.715 [2024-07-12 19:19:08.156390] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451997 ] 00:27:05.715 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.715 [2024-07-12 19:19:08.221130] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.974 [2024-07-12 19:19:08.301157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.543 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:06.543 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:06.543 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:06.543 19:19:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:06.802 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:06.802 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.802 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:06.802 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.802 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:06.802 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:07.061 nvme0n1 00:27:07.062 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:07.062 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.062 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:07.062 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.062 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:07.062 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:07.062 Running I/O for 2 seconds... 00:27:07.062 [2024-07-12 19:19:09.620339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.062 [2024-07-12 19:19:09.620371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.062 [2024-07-12 19:19:09.620381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.632791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.632816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.632826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.641035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.641056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.641065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.653475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.653496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.653505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.665932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.665953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.665962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.676254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.676273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.676281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.688717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.688736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15453 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.688744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.697600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.697619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.697627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.709454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.709474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.709482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.717743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.717764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.717774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.727918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.727937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.727946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.737758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.737778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.737785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.747410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.747430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.747440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.756600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.756620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:83 nsid:1 lba:21220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.756628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.765373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.765394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.765402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.776034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.776055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.776063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.787202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.787223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.787240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.795679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.795699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.795707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.806120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.806141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.806148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.815442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.815462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.815470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.824939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.824959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.321 [2024-07-12 19:19:09.824967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.321 [2024-07-12 19:19:09.834241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.321 [2024-07-12 19:19:09.834261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-07-12 19:19:09.834268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.322 [2024-07-12 19:19:09.843392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.322 [2024-07-12 19:19:09.843412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-07-12 19:19:09.843420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.322 [2024-07-12 19:19:09.853006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.322 [2024-07-12 19:19:09.853026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-07-12 19:19:09.853034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.322 [2024-07-12 19:19:09.861413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.322 [2024-07-12 19:19:09.861434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-07-12 19:19:09.861443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.322 [2024-07-12 19:19:09.871755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.322 [2024-07-12 19:19:09.871778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-07-12 19:19:09.871786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.322 [2024-07-12 19:19:09.882366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.322 [2024-07-12 19:19:09.882386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-07-12 19:19:09.882394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:09.892217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 
[2024-07-12 19:19:09.892242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:09.892250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:09.901666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:09.901686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:09.901694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:09.910281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:09.910302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:09.910309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:09.922392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:09.922412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:09.922420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:09.932272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:09.932292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:09.932300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:09.942339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:09.942359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:09.942367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:09.950965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:09.950986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:09.950994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:09.962829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:09.962850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:09.962858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:09.973629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:09.973649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:09.973657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:09.982030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:09.982050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:09.982058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:09.992155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:09.992175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:09.992183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:10.001851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:10.001872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:10.001880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:10.012322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:10.012343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:10.012352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:10.021055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:10.021075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:10.021084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:10.031656] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:10.031682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:10.031695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:10.043785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:10.043807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:10.043820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:10.055013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:10.055034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:10.055042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:10.065245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:10.065266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:10.065275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:10.073843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:10.073864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:10.073873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:10.084779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:10.084800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:10.084808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:10.095559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:10.095579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:10.095587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:07.582 [2024-07-12 19:19:10.104105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:10.104126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:10.104134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:10.114914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:10.114934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.582 [2024-07-12 19:19:10.114942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.582 [2024-07-12 19:19:10.124119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.582 [2024-07-12 19:19:10.124139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.583 [2024-07-12 19:19:10.124147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.583 [2024-07-12 19:19:10.135518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.583 [2024-07-12 19:19:10.135544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.583 [2024-07-12 19:19:10.135552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.583 [2024-07-12 19:19:10.143286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.583 [2024-07-12 19:19:10.143306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.583 [2024-07-12 19:19:10.143314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.870 [2024-07-12 19:19:10.155218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.870 [2024-07-12 19:19:10.155245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.870 [2024-07-12 19:19:10.155254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.870 [2024-07-12 19:19:10.165108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.870 [2024-07-12 19:19:10.165128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.870 [2024-07-12 19:19:10.165136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.870 [2024-07-12 19:19:10.173130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.870 [2024-07-12 19:19:10.173150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.870 [2024-07-12 19:19:10.173158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.870 [2024-07-12 19:19:10.183963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.870 [2024-07-12 19:19:10.183982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.870 [2024-07-12 19:19:10.183990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.870 [2024-07-12 19:19:10.192924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.870 [2024-07-12 19:19:10.192943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.870 [2024-07-12 19:19:10.192952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.870 [2024-07-12 19:19:10.202778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.870 [2024-07-12 19:19:10.202797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.870 [2024-07-12 19:19:10.202806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.870 [2024-07-12 19:19:10.211289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.870 [2024-07-12 19:19:10.211308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.870 [2024-07-12 19:19:10.211320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.870 [2024-07-12 19:19:10.221615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.870 [2024-07-12 19:19:10.221634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.870 [2024-07-12 19:19:10.221641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.870 [2024-07-12 19:19:10.231387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20) 00:27:07.870 [2024-07-12 19:19:10.231407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.870 [2024-07-12 19:19:10.231415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.870 [2024-07-12 19:19:10.240474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.240492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.240500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.250084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.250103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.250111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.260054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.260074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.260082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.269561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.269580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.269588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.277592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.277613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.277621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.287893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.287914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.287922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.298049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.298072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.298080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.309982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.310002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.310010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.318339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.318358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.318366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.329895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.329915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.329923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.339990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.340009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.340017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.349007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.349027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.349034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.359058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.359078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.359085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.368428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.368447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.368455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.377398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.377417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.377424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.386948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.386967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.386975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.397161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.397182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.397190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:07.871 [2024-07-12 19:19:10.405478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:07.871 [2024-07-12 19:19:10.405498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.871 [2024-07-12 19:19:10.405506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.130 [2024-07-12 19:19:10.416252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.130 [2024-07-12 19:19:10.416273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.130 [2024-07-12 19:19:10.416282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.130 [2024-07-12 19:19:10.425790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.130 [2024-07-12 19:19:10.425808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.130 [2024-07-12 19:19:10.425817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.130 [2024-07-12 19:19:10.435037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.130 [2024-07-12 19:19:10.435056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.130 [2024-07-12 19:19:10.435064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.130 [2024-07-12 19:19:10.443381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.130 [2024-07-12 19:19:10.443401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.130 [2024-07-12 19:19:10.443409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.130 [2024-07-12 19:19:10.454529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.130 [2024-07-12 19:19:10.454549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.130 [2024-07-12 19:19:10.454557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.130 [2024-07-12 19:19:10.464348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.130 [2024-07-12 19:19:10.464367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.130 [2024-07-12 19:19:10.464379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.130 [2024-07-12 19:19:10.476039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.130 [2024-07-12 19:19:10.476058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.130 [2024-07-12 19:19:10.476067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.130 [2024-07-12 19:19:10.487177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.130 [2024-07-12 19:19:10.487197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.130 [2024-07-12 19:19:10.487204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.130 [2024-07-12 19:19:10.496057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.130 [2024-07-12 19:19:10.496076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.130 [2024-07-12 19:19:10.496084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.130 [2024-07-12 19:19:10.506977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.130 [2024-07-12 19:19:10.506996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.507004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.515819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.515837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.515845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.525130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.525148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.525156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.533627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.533646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.533654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.543090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.543110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.543117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.553309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.553332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.553340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.562349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.562368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.562375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.570828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.570847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.570855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.580326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.580345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.580353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.589557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.589575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.589583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.599879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.599898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.599906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.608887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.608905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.608913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.617449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.617468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.617475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.629819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.629838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.629846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.642518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.642538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.642547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.654106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.654126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.654133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.663228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.663247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.663255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.675325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.675345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.675353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.683603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.683622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.683629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.131 [2024-07-12 19:19:10.695463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.131 [2024-07-12 19:19:10.695481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.131 [2024-07-12 19:19:10.695489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.389 [2024-07-12 19:19:10.706453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.389 [2024-07-12 19:19:10.706474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.389 [2024-07-12 19:19:10.706482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.389 [2024-07-12 19:19:10.716916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.389 [2024-07-12 19:19:10.716935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.389 [2024-07-12 19:19:10.716943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.389 [2024-07-12 19:19:10.725479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.389 [2024-07-12 19:19:10.725498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.389 [2024-07-12 19:19:10.725514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.389 [2024-07-12 19:19:10.737467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.389 [2024-07-12 19:19:10.737487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.389 [2024-07-12 19:19:10.737495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.389 [2024-07-12 19:19:10.745583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.389 [2024-07-12 19:19:10.745602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.745610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.757483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.757503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.757511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.768753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.768772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.768780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.777378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.777397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.777405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.787987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.788006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.788014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.797728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.797747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.797755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.806977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.806996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.807004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.816515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.816537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.816545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.825160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.825179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.825187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.835656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.835676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.835684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.845260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.845280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.845288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.854009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.854028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.854036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.863972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.863992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.864000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.873037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.873056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.873064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.882817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.882835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.882843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.892291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.892310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.892318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.902353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.902372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.902380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.912283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.912303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.912311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.920527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.920546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.920554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.932337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.932356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.932363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.940968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.940988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.940995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.390 [2024-07-12 19:19:10.953216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.390 [2024-07-12 19:19:10.953241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.390 [2024-07-12 19:19:10.953249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.648 [2024-07-12 19:19:10.961667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:10.961688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:10.961696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:10.971771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:10.971790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:10.971799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:10.981304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:10.981324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:10.981335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:10.989960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:10.989979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:10.989987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:10.999803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:10.999822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:10.999830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.009741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.009762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.009771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.018237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.018257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.018265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.029621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.029640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.029648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.040970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.040989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.040997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.049913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.049932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.049940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.061341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.061360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.061368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.073682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.073702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.073710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.085737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.085756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.085765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.097581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.097600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.097607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.108448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.108468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.108476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.118696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.118714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.118722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.127832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.127851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.127859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.136285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.136305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.136313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.148248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.148270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.148278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.157271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.157291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.157302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.168403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.168424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.168433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.178481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.178502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.178510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.187014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.187036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.187044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.198251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.198271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.198279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.649 [2024-07-12 19:19:11.206717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.649 [2024-07-12 19:19:11.206736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.649 [2024-07-12 19:19:11.206744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.908 [2024-07-12 19:19:11.216951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.908 [2024-07-12 19:19:11.216971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.908 [2024-07-12 19:19:11.216980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.908 [2024-07-12 19:19:11.226492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.908 [2024-07-12 19:19:11.226513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.908 [2024-07-12 19:19:11.226522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.908 [2024-07-12 19:19:11.235034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.908 [2024-07-12 19:19:11.235054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.908 [2024-07-12 19:19:11.235062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.908 [2024-07-12 19:19:11.245436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.908 [2024-07-12 19:19:11.245460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.908 [2024-07-12 19:19:11.245468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.908 [2024-07-12 19:19:11.255412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.908 [2024-07-12 19:19:11.255432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.908 [2024-07-12 19:19:11.255440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.908 [2024-07-12 19:19:11.263541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.908 [2024-07-12 19:19:11.263561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.908 [2024-07-12 19:19:11.263569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.908 [2024-07-12 19:19:11.274166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.274186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.274193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.283148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.283168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.283175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.293138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.293158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.293166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.301747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.301767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.301776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.311144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.311166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.311174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.321635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.321656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.321665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.332524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.332545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.332553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.341976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.341996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.342004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.350196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.350216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.350229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.361714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.361734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.361742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.371539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.371559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.371566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.382440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.382461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.382469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.390725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.390744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.390752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.403087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.403116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.403124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.414176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.414197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.414211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.422102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.422122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.422131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.434418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.434438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.434446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.446323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.446344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.446352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.458591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.458610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.458618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:08.909 [2024-07-12 19:19:11.467037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:08.909 [2024-07-12 19:19:11.467057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.909 [2024-07-12 19:19:11.467064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167 [2024-07-12 19:19:11.477674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:09.167 [2024-07-12 19:19:11.477695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.167 [2024-07-12 19:19:11.477705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167 [2024-07-12 19:19:11.487568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:09.167 [2024-07-12 19:19:11.487588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.167 [2024-07-12 19:19:11.487597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167 [2024-07-12 19:19:11.496349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:09.167 [2024-07-12 19:19:11.496369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.167 [2024-07-12 19:19:11.496378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167 [2024-07-12 19:19:11.508089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:09.167 [2024-07-12 19:19:11.508113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.167 [2024-07-12 19:19:11.508121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167 [2024-07-12 19:19:11.519546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:09.167 [2024-07-12 19:19:11.519568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.167 [2024-07-12 19:19:11.519575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167 [2024-07-12 19:19:11.529411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:09.167 [2024-07-12 19:19:11.529431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.167 [2024-07-12 19:19:11.529439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167 [2024-07-12 19:19:11.538672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:09.167 [2024-07-12 19:19:11.538690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.167 [2024-07-12 19:19:11.538698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167 [2024-07-12 19:19:11.548087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:09.167 [2024-07-12 19:19:11.548107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.167 [2024-07-12 19:19:11.548115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167 [2024-07-12 19:19:11.557915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:09.167 [2024-07-12 19:19:11.557934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.167 [2024-07-12 19:19:11.557942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167 [2024-07-12 19:19:11.566424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:09.167 [2024-07-12 19:19:11.566445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.167 [2024-07-12 19:19:11.566452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167 [2024-07-12 19:19:11.577604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:09.167 [2024-07-12 19:19:11.577624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.167 [2024-07-12 19:19:11.577632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167 [2024-07-12 19:19:11.586192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:09.167 [2024-07-12 19:19:11.586211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.167 [2024-07-12 19:19:11.586218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167 [2024-07-12 19:19:11.595911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:09.167 [2024-07-12 19:19:11.595931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.167 [2024-07-12 19:19:11.595939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167 [2024-07-12 19:19:11.606335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:09.167 [2024-07-12 19:19:11.606354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.167 [2024-07-12 19:19:11.606362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167 [2024-07-12 19:19:11.614561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x638f20)
00:27:09.167 [2024-07-12 19:19:11.614580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.167 [2024-07-12 19:19:11.614587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.167
00:27:09.167 Latency(us)
00:27:09.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:09.167 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:09.167 nvme0n1 : 2.00 25607.24 100.03 0.00 0.00 4991.99 2578.70 16526.47
00:27:09.167 ===================================================================================================================
00:27:09.167 Total : 25607.24 100.03 0.00 0.00 4991.99 2578.70 16526.47
00:27:09.167 0
00:27:09.167 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:09.167 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:09.167 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:09.167 | .driver_specific
00:27:09.167 | .nvme_error
00:27:09.167 | .status_code
00:27:09.167 | .command_transient_transport_error'
00:27:09.167 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:09.425 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 201 > 0 ))
00:27:09.425 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 451997
00:27:09.425 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 451997 ']'
00:27:09.425 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 451997
00:27:09.425 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:09.425 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:09.425 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 451997
00:27:09.425 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:09.425 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:09.425 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 451997'
00:27:09.425 killing process with pid 451997
00:27:09.425 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 451997
00:27:09.425 Received shutdown signal, test time was about 2.000000 seconds
00:27:09.425
00:27:09.425 Latency(us)
00:27:09.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:09.425 ===================================================================================================================
00:27:09.425 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:09.425 19:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 451997
00:27:09.682 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:09.682 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:09.682 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:09.682 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:09.682 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:09.682 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=452690
00:27:09.682 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 452690 /var/tmp/bperf.sock
00:27:09.682 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:09.682 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 452690 ']'
00:27:09.682 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:09.682 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:09.682 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:09.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:09.682 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:09.682 [2024-07-12 19:19:12.094426] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:27:09.682 [2024-07-12 19:19:12.094474] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452690 ]
00:27:09.682 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:09.682 Zero copy mechanism will not be used.
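The get_transient_errcount trace above shows how the test decides pass/fail for the first workload: it queries bdev_get_iostat through the bperf RPC socket, pulls the command_transient_transport_error counter out of the bdev's NVMe error statistics with jq (these counters exist because bdev_nvme_set_options --nvme-error-stat is enabled for these runs, as the trace below shows), and asserts the count is positive; here it was 201. A minimal standalone sketch of that check, using the same paths, socket, and bdev name as the log; the errcount variable name is illustrative:

    #!/usr/bin/env bash
    # Re-creation of host/digest.sh's get_transient_errcount as traced above.
    # Assumes bdevperf is already serving RPCs on /var/tmp/bperf.sock.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    get_transient_errcount() {
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))  # the run above recorded 201 transient transport errors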
00:27:09.682 EAL: No free 2048 kB hugepages reported on node 1
00:27:09.683 [2024-07-12 19:19:12.160118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:09.683 [2024-07-12 19:19:12.227882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:10.614 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:10.614 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:27:10.614 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:10.614 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:10.614 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:10.614 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:10.614 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:10.614 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:10.614 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:10.614 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:10.872 nvme0n1
00:27:10.872 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:10.872 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:10.872 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:11.132 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:11.132 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:11.132 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:11.132 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:11.132 Zero copy mechanism will not be used.
00:27:11.132 Running I/O for 2 seconds...
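Before this 131072-byte run starts, the trace above sets up the whole error-injection path over the bperf RPC socket: NVMe error statistics are enabled with unlimited bdev-layer retries, crc32c error injection in the accel layer is kept disabled while the controller attaches with TCP data digest (--ddgst) enabled, injection is then switched to corrupt mode, and bdevperf.py perform_tests kicks off the queued workload. A condensed replay of that sequence, a sketch only, with the rpc wrapper standing in for the script's bperf_rpc/rpc_cmd helpers:

    #!/usr/bin/env bash
    # Setup sequence for the digest-error run, as traced above.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    # Count NVMe errors per status code; --bdev-retry-count -1 retries failed
    # I/O indefinitely, so injected digest errors are recorded in the iostat
    # counters instead of failing the job.
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Keep crc32c corruption off while the controller attaches with TCP
    # data digest (--ddgst) enabled...
    rpc accel_error_inject_error -o crc32c -t disable
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ...then corrupt computed crc32c values (-i 32, as in the trace) so the
    # receive-side data digest check fails on reads.
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # Start the configured randread workload inside the running bdevperf.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests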
00:27:11.132 [2024-07-12 19:19:13.534997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.132 [2024-07-12 19:19:13.535032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.132 [2024-07-12 19:19:13.535043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.132 [2024-07-12 19:19:13.540753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.132 [2024-07-12 19:19:13.540779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.132 [2024-07-12 19:19:13.540789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.132 [2024-07-12 19:19:13.547193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.132 [2024-07-12 19:19:13.547214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.132 [2024-07-12 19:19:13.547222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.132 [2024-07-12 19:19:13.552536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.132 [2024-07-12 19:19:13.552558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.132 [2024-07-12 19:19:13.552567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.132 [2024-07-12 19:19:13.559345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.132 [2024-07-12 19:19:13.559367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.132 [2024-07-12 19:19:13.559376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.132 [2024-07-12 19:19:13.566538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.132 [2024-07-12 19:19:13.566559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.132 [2024-07-12 19:19:13.566568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.132 [2024-07-12 19:19:13.573427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.132 [2024-07-12 19:19:13.573453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.132 [2024-07-12 19:19:13.573461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.132 [2024-07-12 19:19:13.580861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.132 [2024-07-12 19:19:13.580883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.132 [2024-07-12 19:19:13.580891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.132 [2024-07-12 19:19:13.588061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.132 [2024-07-12 19:19:13.588084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.132 [2024-07-12 19:19:13.588092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.132 [2024-07-12 19:19:13.594456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.132 [2024-07-12 19:19:13.594479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.132 [2024-07-12 19:19:13.594487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.132 [2024-07-12 19:19:13.601992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.132 [2024-07-12 19:19:13.602013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.132 [2024-07-12 19:19:13.602022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.132 [2024-07-12 19:19:13.609188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.132 [2024-07-12 19:19:13.609210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.132 [2024-07-12 19:19:13.609218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.132 [2024-07-12 19:19:13.615400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.132 [2024-07-12 19:19:13.615421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.132 [2024-07-12 19:19:13.615429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.132 [2024-07-12 19:19:13.621814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.132 [2024-07-12 19:19:13.621836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.132 [2024-07-12 19:19:13.621843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.132 [2024-07-12 19:19:13.627204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.132 [2024-07-12 19:19:13.627231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.132 [2024-07-12 19:19:13.627239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.132 [2024-07-12 19:19:13.632715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.133 [2024-07-12 19:19:13.632736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.133 [2024-07-12 19:19:13.632744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.133 [2024-07-12 19:19:13.638448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.133 [2024-07-12 19:19:13.638468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.133 [2024-07-12 19:19:13.638476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.133 [2024-07-12 19:19:13.644075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.133 [2024-07-12 19:19:13.644095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.133 [2024-07-12 19:19:13.644103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.133 [2024-07-12 19:19:13.649694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.133 [2024-07-12 19:19:13.649715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.133 [2024-07-12 19:19:13.649723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.133 [2024-07-12 19:19:13.655107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.133 [2024-07-12 19:19:13.655130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.133 [2024-07-12 19:19:13.655138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.133 [2024-07-12 19:19:13.660999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.133 [2024-07-12 19:19:13.661020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.133 
[2024-07-12 19:19:13.661028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.133 [2024-07-12 19:19:13.667127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.133 [2024-07-12 19:19:13.667148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.133 [2024-07-12 19:19:13.667156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.133 [2024-07-12 19:19:13.672763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.133 [2024-07-12 19:19:13.672784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.133 [2024-07-12 19:19:13.672792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.133 [2024-07-12 19:19:13.676721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.133 [2024-07-12 19:19:13.676741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.133 [2024-07-12 19:19:13.676752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.133 [2024-07-12 19:19:13.681443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.133 [2024-07-12 19:19:13.681464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.133 [2024-07-12 19:19:13.681471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.133 [2024-07-12 19:19:13.686836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.133 [2024-07-12 19:19:13.686856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.133 [2024-07-12 19:19:13.686864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.133 [2024-07-12 19:19:13.691636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.133 [2024-07-12 19:19:13.691655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.133 [2024-07-12 19:19:13.691663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.133 [2024-07-12 19:19:13.696962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.133 [2024-07-12 19:19:13.696983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.133 [2024-07-12 19:19:13.696991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.702442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.702463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.702471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.708268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.708288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.708296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.713602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.713623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.713630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.719066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.719086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.719093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.724663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.724684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.724692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.729908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.729929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.729937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.735306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.735327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.735334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.740620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.740640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.740648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.746041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.746062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.746070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.751436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.751456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.751464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.756724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.756744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.756752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.762182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.762202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.762210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.767638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.767658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.767669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.773306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.773325] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.773333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.778928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.778948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.778956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.784253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.784273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.784281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.789562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.789582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.789590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.392 [2024-07-12 19:19:13.795103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.392 [2024-07-12 19:19:13.795125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.392 [2024-07-12 19:19:13.795133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.800756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.800778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.800785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.806235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.806255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.806264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.811625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.811646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.811654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.816781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.816806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.816814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.822235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.822257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.822265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.827620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.827641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.827650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.833141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.833162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.833171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.838593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.838614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.838622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.843968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.843990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.843998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.849402] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.849423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.849431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.854844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.854865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.854873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.860545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.860566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.860575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.866021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.866042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.866050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.871477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.871498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.871506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.877163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.877184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.877191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.882635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.882656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.882664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:11.393 [2024-07-12 19:19:13.888091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.888113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.888120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.893822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.893843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.893851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.899458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.899481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.899489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.904954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.904975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.904982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.910374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.910394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.910404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.915892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.915913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.915921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.921256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.921277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.921284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.926707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.926728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.926735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.932170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.932191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.932199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.937605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.937626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.937634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.943045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.943065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.943073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.948403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.948424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.393 [2024-07-12 19:19:13.948432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.393 [2024-07-12 19:19:13.953925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.393 [2024-07-12 19:19:13.953944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.394 [2024-07-12 19:19:13.953952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.653 [2024-07-12 19:19:13.959345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.653 [2024-07-12 19:19:13.959370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.653 [2024-07-12 19:19:13.959378] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.653 [2024-07-12 19:19:13.964794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.653 [2024-07-12 19:19:13.964815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.653 [2024-07-12 19:19:13.964823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.653 [2024-07-12 19:19:13.970119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.653 [2024-07-12 19:19:13.970139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.653 [2024-07-12 19:19:13.970147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.653 [2024-07-12 19:19:13.975500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.653 [2024-07-12 19:19:13.975521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.653 [2024-07-12 19:19:13.975529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.653 [2024-07-12 19:19:13.980961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.653 [2024-07-12 19:19:13.980982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.653 [2024-07-12 19:19:13.980990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.653 [2024-07-12 19:19:13.986310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.653 [2024-07-12 19:19:13.986331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.653 [2024-07-12 19:19:13.986339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.653 [2024-07-12 19:19:13.991805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.653 [2024-07-12 19:19:13.991827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.653 [2024-07-12 19:19:13.991836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.653 [2024-07-12 19:19:13.997661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:11.653 [2024-07-12 19:19:13.997684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.653 [2024-07-12 19:19:13.997692] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:11.653 [2024-07-12 19:19:14.003543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0)
00:27:11.653 [2024-07-12 19:19:14.003564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.653 [2024-07-12 19:19:14.003572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (nvme_tcp.c:1459 data digest error -> nvme_qpair.c:243 READ command print -> nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for dozens more READ commands on tqpair 0x198d0b0, qid:1, with varying cid/lba/sqhd values, timestamps 2024-07-12 19:19:14.010521 through 19:19:14.750981 ...]
00:27:12.440 [2024-07-12 19:19:14.756372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0)
00:27:12.440 [2024-07-12 19:19:14.756392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1
lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.440 [2024-07-12 19:19:14.756400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.440 [2024-07-12 19:19:14.761713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.440 [2024-07-12 19:19:14.761734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.440 [2024-07-12 19:19:14.761746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.440 [2024-07-12 19:19:14.767112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.440 [2024-07-12 19:19:14.767132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.440 [2024-07-12 19:19:14.767140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.440 [2024-07-12 19:19:14.772483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.440 [2024-07-12 19:19:14.772503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.440 [2024-07-12 19:19:14.772511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.440 [2024-07-12 19:19:14.777854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.440 [2024-07-12 19:19:14.777874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.440 [2024-07-12 19:19:14.777881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.440 [2024-07-12 19:19:14.783727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.440 [2024-07-12 19:19:14.783748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.440 [2024-07-12 19:19:14.783755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.440 [2024-07-12 19:19:14.789351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.440 [2024-07-12 19:19:14.789371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.440 [2024-07-12 19:19:14.789379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.440 [2024-07-12 19:19:14.794787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.440 [2024-07-12 19:19:14.794808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.440 [2024-07-12 19:19:14.794815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.440 [2024-07-12 19:19:14.800132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.440 [2024-07-12 19:19:14.800153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.440 [2024-07-12 19:19:14.800160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.440 [2024-07-12 19:19:14.805520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.440 [2024-07-12 19:19:14.805541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.440 [2024-07-12 19:19:14.805549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.440 [2024-07-12 19:19:14.810911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.440 [2024-07-12 19:19:14.810940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.440 [2024-07-12 19:19:14.810948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.440 [2024-07-12 19:19:14.816324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.440 [2024-07-12 19:19:14.816343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.440 [2024-07-12 19:19:14.816353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.821657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.821678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.821685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.826998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.827020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.827027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.832323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 
00:27:12.441 [2024-07-12 19:19:14.832342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.832350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.837607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.837628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.837635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.842925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.842945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.842953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.848272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.848293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.848300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.853610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.853631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.853638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.858959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.858979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.858987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.864319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.864348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.864356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.869725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.869746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.869754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.875072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.875093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.875100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.880388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.880410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.880418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.885705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.885726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.885734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.891055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.891076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.891084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.896399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.896419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.896427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.901766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.901786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.901797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.907096] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.907117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.907125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.912407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.912428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.912436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.917754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.917774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.917782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.923137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.923158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.923166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.928781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.928802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.928809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.935898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.935919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.935927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.943179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.943201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.943208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:27:12.441 [2024-07-12 19:19:14.949086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.949106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.949113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.955187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.955209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.955217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.960900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.960920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.960927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.967272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.967292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.967300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.974832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.974854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.974863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.441 [2024-07-12 19:19:14.981548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.441 [2024-07-12 19:19:14.981570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.441 [2024-07-12 19:19:14.981578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.442 [2024-07-12 19:19:14.987848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.442 [2024-07-12 19:19:14.987869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.442 [2024-07-12 19:19:14.987876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.442 [2024-07-12 19:19:14.993386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.442 [2024-07-12 19:19:14.993407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.442 [2024-07-12 19:19:14.993414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.442 [2024-07-12 19:19:14.996630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.442 [2024-07-12 19:19:14.996649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.442 [2024-07-12 19:19:14.996657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.442 [2024-07-12 19:19:15.003405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.442 [2024-07-12 19:19:15.003426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.442 [2024-07-12 19:19:15.003437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.011028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.011052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.011060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.018114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.018135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.018143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.024788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.024809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.024818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.032588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.032610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.032618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.040415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.040436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.040444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.048109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.048129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.048137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.055188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.055210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.055218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.062596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.062619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.062627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.070587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.070613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.070621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.079089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.079110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.079118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.087217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.087244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:12.702 [2024-07-12 19:19:15.087253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.095002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.095023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.095031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.103174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.103195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.103204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.110880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.110902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.110911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.118844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.118866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.118875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.127222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.127251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.127259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.134298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.134320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.134328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.140594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.140615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.140623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.145975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.145996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.146004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.151350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.151370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.151378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.156725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.156746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.702 [2024-07-12 19:19:15.156754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.702 [2024-07-12 19:19:15.162057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.702 [2024-07-12 19:19:15.162078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.162085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.167394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.167415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.167422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.172744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.172764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.172772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.178121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.178142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.178149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.183465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.183485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.183497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.188814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.188834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.188842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.194120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.194141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.194149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.199477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.199497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.199505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.204854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.204874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.204882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.210233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.210254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.210262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.215604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 
00:27:12.703 [2024-07-12 19:19:15.215625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.215633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.220940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.220960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.220968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.226334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.226356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.226363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.231667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.231691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.231698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.237024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.237044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.237052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.242401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.242421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.242429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.247814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.247835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.247843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.252874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.252895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.252903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.258167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.258187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.258195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.703 [2024-07-12 19:19:15.263270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.703 [2024-07-12 19:19:15.263290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.703 [2024-07-12 19:19:15.263297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.963 [2024-07-12 19:19:15.268396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.963 [2024-07-12 19:19:15.268418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.963 [2024-07-12 19:19:15.268426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.963 [2024-07-12 19:19:15.273552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.963 [2024-07-12 19:19:15.273573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.963 [2024-07-12 19:19:15.273581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.963 [2024-07-12 19:19:15.278820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.963 [2024-07-12 19:19:15.278840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.963 [2024-07-12 19:19:15.278848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.963 [2024-07-12 19:19:15.284109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0) 00:27:12.963 [2024-07-12 19:19:15.284130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.963 [2024-07-12 19:19:15.284138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.964 [2024-07-12 19:19:15.289421] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0)
00:27:12.964 [2024-07-12 19:19:15.289442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.964 [2024-07-12 19:19:15.289450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:12.964 [2024-07-12 19:19:15.294755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198d0b0)
00:27:12.964 [2024-07-12 19:19:15.294775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.964 [2024-07-12 19:19:15.294783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern -- data digest error on tqpair 0x198d0b0, READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for every remaining read of the run, [2024-07-12 19:19:15.300176] through [2024-07-12 19:19:15.527951] ...]
00:27:13.225
00:27:13.225 Latency(us)
00:27:13.225 Device Information : runtime(s)    IOPS     MiB/s   Fail/s   TO/s   Average      min      max
00:27:13.225 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:13.225 nvme0n1            :       2.00 5527.61    690.95     0.00   0.00   2891.91   662.48  9118.05
00:27:13.225 ===================================================================================================================
00:27:13.225 Total              :            5527.61    690.95     0.00   0.00   2891.91   662.48  9118.05
00:27:13.225 0
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:13.225 | .driver_specific
00:27:13.225 | .nvme_error
00:27:13.225 | .status_code
00:27:13.225 | .command_transient_transport_error'
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
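A quick cross-check of the numbers above, and of what get_transient_errcount is about to read: every failed read completes as COMMAND TRANSIENT TRANSPORT ERROR -- "(00/22)" is (status code type 0x0 / status code 0x22) -- and, because the bdev_nvme module evidently runs with --nvme-error-stat (it is set again for the next bdevperf instance below), each one increments a per-bdev counter. The throughput row is also self-consistent: 5527.61 IOPS x 128 KiB per I/O = 5527.61/8 = 690.95 MiB/s, matching the MiB/s column. A rough, non-authoritative Python equivalent of the jq pipeline traced above (rpc.py, bdev_get_iostat and the JSON path are exactly what the trace shows; the wrapper itself is only a sketch):

    import json, subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    def get_transient_errcount(bdev: str, sock: str = "/var/tmp/bperf.sock") -> int:
        # bdev_get_iostat returns JSON; walk the same path the jq filter selects
        out = subprocess.check_output([RPC, "-s", sock, "bdev_get_iostat", "-b", bdev])
        stat = json.loads(out)
        return int(stat["bdevs"][0]["driver_specific"]["nvme_error"]
                       ["status_code"]["command_transient_transport_error"])

    # host/digest.sh only asserts the count is non-zero; this run returned 356.
    assert get_transient_errcount("nvme0n1") > 0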
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 356 > 0 ))
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 452690
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 452690 ']'
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 452690
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 452690
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 452690'
killing process with pid 452690
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 452690
Received shutdown signal, test time was about 2.000000 seconds
00:27:13.225
00:27:13.225 Latency(us)
00:27:13.225 Device Information : runtime(s)    IOPS     MiB/s   Fail/s   TO/s   Average      min      max
00:27:13.225 ===================================================================================================================
00:27:13.225 Total              :               0.00      0.00     0.00   0.00      0.00     0.00     0.00
00:27:13.225 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 452690
00:27:13.484 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:13.484 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:13.484 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:13.484 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:27:13.484 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:13.484 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=453388
00:27:13.484 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 453388 /var/tmp/bperf.sock
00:27:13.484 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:13.484 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 453388 ']'
00:27:13.484 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:13.484 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:13.484 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
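Before the second bdevperf instance comes up, a note on the completion prints that fill this log: the qid/cid/sqhd/p/m/dnr fields come straight out of the NVMe completion queue entry, and "(00/22)" is printed as (status code type / status code) in hex. A small sketch of the unpacking -- the bit layout is assumed from the NVMe base specification (status field in Dword 3 bits 31:17, phase tag in bit 16), not lifted from SPDK's sources:

    def decode_cqe_dw3(dw3: int) -> dict:
        # Dword 3 of a completion queue entry: phase tag in bit 16,
        # status field in bits 31:17.
        return {
            "p":   (dw3 >> 16) & 0x1,   # phase tag
            "sc":  (dw3 >> 17) & 0xFF,  # status code; 0x22 = Command Transient Transport Error
            "sct": (dw3 >> 25) & 0x7,   # status code type; 0x0 = generic command status
            "m":   (dw3 >> 30) & 0x1,   # more status information available
            "dnr": (dw3 >> 31) & 0x1,   # do not retry
        }

Every failure in this run decodes to sct=0x0, sc=0x22 with dnr:0, i.e. a retryable transport-level error rather than a media or command error.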
00:27:13.484 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:13.484 19:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:13.484 [2024-07-12 19:19:16.010530] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:27:13.484 [2024-07-12 19:19:16.010585] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453388 ]
00:27:13.743 EAL: No free 2048 kB hugepages reported on node 1
00:27:13.743 [2024-07-12 19:19:16.076706] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:13.743 [2024-07-12 19:19:16.156141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:14.309 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:14.309 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:27:14.309 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:14.309 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:14.567 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:14.567 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:14.567 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:14.567 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:14.567 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:14.567 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:15.134 nvme0n1
00:27:15.134 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:15.134 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:15.134 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:15.134 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:15.134 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:15.134 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:15.134 Running I/O for 2 seconds...
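The sequence above is what arms the failure: accel_error_inject_error -o crc32c -t corrupt -i 256 tells the accel framework's error-injection module to start corrupting CRC32C results (with -i 256 as given; exact semantics of the argument per SPDK's accel_error module), and the controller is attached with --ddgst, so a data digest is generated and checked on every data PDU. The digest itself is plain CRC32C (Castagnoli) over the PDU payload. A minimal, self-contained sketch of the comparison that data_crc32_calc_done is failing below (pure-Python bitwise CRC32C; the helper names are ours):

    def crc32c(data: bytes) -> int:
        # Reflected CRC-32C, polynomial 0x82F63B78 -- the digest NVMe/TCP
        # uses for HDGST/DDGST.
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1))
        return crc ^ 0xFFFFFFFF

    def ddgst_ok(pdu_payload: bytes, received_ddgst: int) -> bool:
        # With crc32c corruption injected on the accel side, the recomputed
        # digest stops matching and each request completes with
        # COMMAND TRANSIENT TRANSPORT ERROR (00/22).
        return crc32c(pdu_payload) == received_ddgst

    assert crc32c(b"123456789") == 0xE3069283  # standard CRC-32C check value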
00:27:15.134 [2024-07-12 19:19:17.547147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ee5c8
00:27:15.134 [2024-07-12 19:19:17.547916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:15.134 [2024-07-12 19:19:17.547945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:15.134 [2024-07-12 19:19:17.555850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fac10
00:27:15.134 [2024-07-12 19:19:17.556630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:15.134 [2024-07-12 19:19:17.556651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
[... the same three-line pattern -- Data digest error on tqpair 0xa424d0 against a new pdu, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for the remaining writes, [2024-07-12 19:19:17.565435] through [2024-07-12 19:19:18.231833] ...]
00:27:15.917 [2024-07-12 19:19:18.240195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fe2e8 [2024-07-12 19:19:18.240908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8064 len:1 SGL
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.917 [2024-07-12 19:19:18.240926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:15.917 [2024-07-12 19:19:18.249248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e3498 00:27:15.917 [2024-07-12 19:19:18.249979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.917 [2024-07-12 19:19:18.249997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:15.917 [2024-07-12 19:19:18.258344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190de470 00:27:15.917 [2024-07-12 19:19:18.259073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.917 [2024-07-12 19:19:18.259091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:15.917 [2024-07-12 19:19:18.267644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e5ec8 00:27:15.917 [2024-07-12 19:19:18.268358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.917 [2024-07-12 19:19:18.268375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:15.917 [2024-07-12 19:19:18.276730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e01f8 00:27:15.917 [2024-07-12 19:19:18.277466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.917 [2024-07-12 19:19:18.277483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:15.917 [2024-07-12 19:19:18.285842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e1710 00:27:15.917 [2024-07-12 19:19:18.286559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.917 [2024-07-12 19:19:18.286577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:15.917 [2024-07-12 19:19:18.294950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f9b30 00:27:15.917 [2024-07-12 19:19:18.295669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.917 [2024-07-12 19:19:18.295687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:15.917 [2024-07-12 19:19:18.303987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f8a50 00:27:15.917 [2024-07-12 19:19:18.304729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:18903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.917 [2024-07-12 19:19:18.304748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.313272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e95a0 00:27:15.918 [2024-07-12 19:19:18.313928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.313948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.322557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ee5c8 00:27:15.918 [2024-07-12 19:19:18.323184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.323203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.332779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e9e10 00:27:15.918 [2024-07-12 19:19:18.333881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.333900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.342465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f8a50 00:27:15.918 [2024-07-12 19:19:18.343712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.343730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.352170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e27f0 00:27:15.918 [2024-07-12 19:19:18.353542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.353561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.360128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e7c50 00:27:15.918 [2024-07-12 19:19:18.360978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.360996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.369208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e5658 00:27:15.918 [2024-07-12 19:19:18.370046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:117 nsid:1 lba:269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.370064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.378188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fd208 00:27:15.918 [2024-07-12 19:19:18.379022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.379040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.387279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e73e0 00:27:15.918 [2024-07-12 19:19:18.388115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.388133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.395767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fac10 00:27:15.918 [2024-07-12 19:19:18.396592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.396610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.404965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e88f8 00:27:15.918 [2024-07-12 19:19:18.405780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.405798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.415211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190dfdc0 00:27:15.918 [2024-07-12 19:19:18.416148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.416168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.424252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190edd58 00:27:15.918 [2024-07-12 19:19:18.425178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.425197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.433297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e0630 00:27:15.918 [2024-07-12 19:19:18.434213] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.434237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.442246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ebb98 00:27:15.918 [2024-07-12 19:19:18.443162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.443180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.450660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e23b8 00:27:15.918 [2024-07-12 19:19:18.451561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.451579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.460192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f0ff8 00:27:15.918 [2024-07-12 19:19:18.461214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.461240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.469718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fe720 00:27:15.918 [2024-07-12 19:19:18.470872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.470890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:15.918 [2024-07-12 19:19:18.479196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e88f8 00:27:15.918 [2024-07-12 19:19:18.480474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.918 [2024-07-12 19:19:18.480492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.178 [2024-07-12 19:19:18.488659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f2510 00:27:16.178 [2024-07-12 19:19:18.489920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.178 [2024-07-12 19:19:18.489938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:16.178 [2024-07-12 19:19:18.496616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190df550 00:27:16.178 [2024-07-12 19:19:18.497884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.178 [2024-07-12 19:19:18.497903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.178 [2024-07-12 19:19:18.504381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e6738 00:27:16.178 [2024-07-12 19:19:18.505039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.178 [2024-07-12 19:19:18.505056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:16.178 [2024-07-12 19:19:18.513457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fe2e8 00:27:16.178 [2024-07-12 19:19:18.514101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.178 [2024-07-12 19:19:18.514119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:16.178 [2024-07-12 19:19:18.522470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e6b70 00:27:16.178 [2024-07-12 19:19:18.523114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.178 [2024-07-12 19:19:18.523131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:16.178 [2024-07-12 19:19:18.531671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f0350 00:27:16.178 [2024-07-12 19:19:18.532310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.178 [2024-07-12 19:19:18.532328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:16.178 [2024-07-12 19:19:18.542100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190df118 00:27:16.178 [2024-07-12 19:19:18.542986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.178 [2024-07-12 19:19:18.543004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:16.178 [2024-07-12 19:19:18.551151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190df118 00:27:16.178 [2024-07-12 19:19:18.552047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.178 [2024-07-12 19:19:18.552065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:16.178 [2024-07-12 19:19:18.560240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190df118 00:27:16.178 [2024-07-12 
19:19:18.561161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.178 [2024-07-12 19:19:18.561179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:16.178 [2024-07-12 19:19:18.569385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190df118 00:27:16.178 [2024-07-12 19:19:18.570302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.178 [2024-07-12 19:19:18.570320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:16.178 [2024-07-12 19:19:18.578637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f8a50 00:27:16.178 [2024-07-12 19:19:18.579521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.178 [2024-07-12 19:19:18.579538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:16.178 [2024-07-12 19:19:18.587930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f7da8 00:27:16.178 [2024-07-12 19:19:18.588805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.178 [2024-07-12 19:19:18.588823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:16.178 [2024-07-12 19:19:18.596279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190efae0 00:27:16.178 [2024-07-12 19:19:18.597142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.178 [2024-07-12 19:19:18.597160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:16.178 [2024-07-12 19:19:18.605978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fc128 00:27:16.178 [2024-07-12 19:19:18.606964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.178 [2024-07-12 19:19:18.606981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:16.179 [2024-07-12 19:19:18.615670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fc128 00:27:16.179 [2024-07-12 19:19:18.616658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.179 [2024-07-12 19:19:18.616677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:16.179 [2024-07-12 19:19:18.624099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ecc78 00:27:16.179 
[2024-07-12 19:19:18.625085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.179 [2024-07-12 19:19:18.625104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:16.179 [2024-07-12 19:19:18.633700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e3498 00:27:16.179 [2024-07-12 19:19:18.634797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.179 [2024-07-12 19:19:18.634814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:16.179 [2024-07-12 19:19:18.641609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fef90 00:27:16.179 [2024-07-12 19:19:18.642220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.179 [2024-07-12 19:19:18.642242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:16.179 [2024-07-12 19:19:18.650610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e9e10 00:27:16.179 [2024-07-12 19:19:18.651222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.179 [2024-07-12 19:19:18.651245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:16.179 [2024-07-12 19:19:18.659727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e9e10 00:27:16.179 [2024-07-12 19:19:18.660335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.179 [2024-07-12 19:19:18.660353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:16.179 [2024-07-12 19:19:18.668799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e9e10 00:27:16.179 [2024-07-12 19:19:18.669406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.179 [2024-07-12 19:19:18.669424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:16.179 [2024-07-12 19:19:18.677815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f9f68 00:27:16.179 [2024-07-12 19:19:18.678421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.179 [2024-07-12 19:19:18.678439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:16.179 [2024-07-12 19:19:18.686941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e95a0 
00:27:16.179 [2024-07-12 19:19:18.687541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.179 [2024-07-12 19:19:18.687559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:16.179 [2024-07-12 19:19:18.695716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f1430 00:27:16.179 [2024-07-12 19:19:18.696306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.179 [2024-07-12 19:19:18.696327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:16.179 [2024-07-12 19:19:18.704447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190eff18 00:27:16.179 [2024-07-12 19:19:18.705014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.179 [2024-07-12 19:19:18.705032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:16.179 [2024-07-12 19:19:18.713530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fd208 00:27:16.179 [2024-07-12 19:19:18.714100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.179 [2024-07-12 19:19:18.714118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:16.179 [2024-07-12 19:19:18.723004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f3e60 00:27:16.179 [2024-07-12 19:19:18.723685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.179 [2024-07-12 19:19:18.723704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:16.179 [2024-07-12 19:19:18.733836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f3e60 00:27:16.179 [2024-07-12 19:19:18.734994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.179 [2024-07-12 19:19:18.735012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:16.179 [2024-07-12 19:19:18.742889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fac10 00:27:16.179 [2024-07-12 19:19:18.744047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.179 [2024-07-12 19:19:18.744065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:16.438 [2024-07-12 19:19:18.751119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with 
pdu=0x2000190e6fa8 00:27:16.438 [2024-07-12 19:19:18.752334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.438 [2024-07-12 19:19:18.752351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:16.438 [2024-07-12 19:19:18.758947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e23b8 00:27:16.438 [2024-07-12 19:19:18.759588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.438 [2024-07-12 19:19:18.759606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:16.438 [2024-07-12 19:19:18.768465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190eaef0 00:27:16.438 [2024-07-12 19:19:18.769215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.769237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.777951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fb480 00:27:16.439 [2024-07-12 19:19:18.778835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.778853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.787568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f8618 00:27:16.439 [2024-07-12 19:19:18.788566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.788584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.795999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ec840 00:27:16.439 [2024-07-12 19:19:18.796651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.796669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.804985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f92c0 00:27:16.439 [2024-07-12 19:19:18.805649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.805666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.814365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa424d0) with pdu=0x2000190e5220 00:27:16.439 [2024-07-12 19:19:18.814800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.814819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.824926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e5a90 00:27:16.439 [2024-07-12 19:19:18.826223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.826244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.833670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ebfd0 00:27:16.439 [2024-07-12 19:19:18.834578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.834596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.842924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fe2e8 00:27:16.439 [2024-07-12 19:19:18.843603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.843621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.853370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f9b30 00:27:16.439 [2024-07-12 19:19:18.854856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.854873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.859818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fe2e8 00:27:16.439 [2024-07-12 19:19:18.860462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.860480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.868423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190df550 00:27:16.439 [2024-07-12 19:19:18.869054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.869071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.877700] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f0350 00:27:16.439 [2024-07-12 19:19:18.878350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.878378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.887285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e0a68 00:27:16.439 [2024-07-12 19:19:18.887934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.887952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.896324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190dfdc0 00:27:16.439 [2024-07-12 19:19:18.897079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.897096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.905844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e5ec8 00:27:16.439 [2024-07-12 19:19:18.906718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.906735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.915379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190eb760 00:27:16.439 [2024-07-12 19:19:18.916375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.916394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.924852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ee190 00:27:16.439 [2024-07-12 19:19:18.925979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.925997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.934579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ebfd0 00:27:16.439 [2024-07-12 19:19:18.935815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.935832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.944076] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190de038 00:27:16.439 [2024-07-12 19:19:18.945440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.945458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.953588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f8618 00:27:16.439 [2024-07-12 19:19:18.955071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.955089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.960033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e9e10 00:27:16.439 [2024-07-12 19:19:18.960689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.960717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.970362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ddc00 00:27:16.439 [2024-07-12 19:19:18.971486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.971503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.979883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fd208 00:27:16.439 [2024-07-12 19:19:18.981108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.981126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.989605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fe720 00:27:16.439 [2024-07-12 19:19:18.990954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:18.990972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:16.439 [2024-07-12 19:19:18.999096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f5be8 00:27:16.439 [2024-07-12 19:19:19.000598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.439 [2024-07-12 19:19:19.000616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:16.700 [2024-07-12 19:19:19.005551] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e0a68 00:27:16.700 [2024-07-12 19:19:19.006126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.700 [2024-07-12 19:19:19.006145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:16.700 [2024-07-12 19:19:19.014773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190dfdc0 00:27:16.700 [2024-07-12 19:19:19.015520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.700 [2024-07-12 19:19:19.015552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.024268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f2510 00:27:16.701 [2024-07-12 19:19:19.025127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.025144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.033811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fa7d8 00:27:16.701 [2024-07-12 19:19:19.034796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.034814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.043285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fd640 00:27:16.701 [2024-07-12 19:19:19.044400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.044418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.052970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190de8a8 00:27:16.701 [2024-07-12 19:19:19.054202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.054220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.062513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f7da8 00:27:16.701 [2024-07-12 19:19:19.063867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.063884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 
19:19:19.071958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fbcf0 00:27:16.701 [2024-07-12 19:19:19.073437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.073455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.078394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190dece0 00:27:16.701 [2024-07-12 19:19:19.079047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.079065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.089274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ef6a8 00:27:16.701 [2024-07-12 19:19:19.090351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.090369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.098371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190df550 00:27:16.701 [2024-07-12 19:19:19.099394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.099412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.108799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190de470 00:27:16.701 [2024-07-12 19:19:19.110278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.110295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.115193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e4de8 00:27:16.701 [2024-07-12 19:19:19.115831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.115849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.125535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ebb98 00:27:16.701 [2024-07-12 19:19:19.126212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.126235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:16.701 
[2024-07-12 19:19:19.136466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f6cc8 00:27:16.701 [2024-07-12 19:19:19.138058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.138076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.142873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fef90 00:27:16.701 [2024-07-12 19:19:19.143635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.143652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.151508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e5ec8 00:27:16.701 [2024-07-12 19:19:19.152238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.152255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.161649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ef270 00:27:16.701 [2024-07-12 19:19:19.162433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.162451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.171084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fdeb0 00:27:16.701 [2024-07-12 19:19:19.172069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.172087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.179741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e7c50 00:27:16.701 [2024-07-12 19:19:19.180725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.180742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.189319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190eb328 00:27:16.701 [2024-07-12 19:19:19.190420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.190437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:16.701 [2024-07-12 19:19:19.198806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190fb048 00:27:16.701 [2024-07-12 19:19:19.200028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.200045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.207278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ed4e8 00:27:16.701 [2024-07-12 19:19:19.208053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.208071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.216500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190de470 00:27:16.701 [2024-07-12 19:19:19.217487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.217505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.225122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f7538 00:27:16.701 [2024-07-12 19:19:19.226100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.226117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.234671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ed4e8 00:27:16.701 [2024-07-12 19:19:19.235769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.235786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.244144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f2510 00:27:16.701 [2024-07-12 19:19:19.245357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.245374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.253681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f7100 00:27:16.701 [2024-07-12 19:19:19.255020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.255041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0060 
p:0 m:0 dnr:0 00:27:16.701 [2024-07-12 19:19:19.262946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190df988 00:27:16.701 [2024-07-12 19:19:19.264279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.701 [2024-07-12 19:19:19.264297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.270679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f5378 00:27:16.961 [2024-07-12 19:19:19.271240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.271258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.280204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190de470 00:27:16.961 [2024-07-12 19:19:19.280842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.280861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.289736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e4140 00:27:16.961 [2024-07-12 19:19:19.290495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.290513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.298297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e6738 00:27:16.961 [2024-07-12 19:19:19.299611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.299629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.306128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e9e10 00:27:16.961 [2024-07-12 19:19:19.306851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.306876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.315621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ee190 00:27:16.961 [2024-07-12 19:19:19.316446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.316464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 
cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.325126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190de470 00:27:16.961 [2024-07-12 19:19:19.326097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.326115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.334635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ddc00 00:27:16.961 [2024-07-12 19:19:19.335783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.335801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.344369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ee5c8 00:27:16.961 [2024-07-12 19:19:19.345579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.345596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.353949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ee190 00:27:16.961 [2024-07-12 19:19:19.355321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.355339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.363487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f0788 00:27:16.961 [2024-07-12 19:19:19.364932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.364949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.372994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e6738 00:27:16.961 [2024-07-12 19:19:19.374557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.374575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.379458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f35f0 00:27:16.961 [2024-07-12 19:19:19.380165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.380182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.388954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f6020 00:27:16.961 [2024-07-12 19:19:19.389819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.389837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.399336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f5be8 00:27:16.961 [2024-07-12 19:19:19.400668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.400685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:16.961 [2024-07-12 19:19:19.408853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e7c50 00:27:16.961 [2024-07-12 19:19:19.410296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.961 [2024-07-12 19:19:19.410318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:16.962 [2024-07-12 19:19:19.417425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f2948 00:27:16.962 [2024-07-12 19:19:19.418421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.962 [2024-07-12 19:19:19.418439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:16.962 [2024-07-12 19:19:19.425762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e7818 00:27:16.962 [2024-07-12 19:19:19.427078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.962 [2024-07-12 19:19:19.427096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:16.962 [2024-07-12 19:19:19.433613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ee190 00:27:16.962 [2024-07-12 19:19:19.434269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.962 [2024-07-12 19:19:19.434286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:16.962 [2024-07-12 19:19:19.444234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190ebb98 00:27:16.962 [2024-07-12 19:19:19.445115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.962 [2024-07-12 19:19:19.445134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:16.962 [2024-07-12 19:19:19.453452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e3498 00:27:16.962 [2024-07-12 19:19:19.454331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.962 [2024-07-12 19:19:19.454348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:16.962 [2024-07-12 19:19:19.463591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e3498 00:27:16.962 [2024-07-12 19:19:19.465002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.962 [2024-07-12 19:19:19.465019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:16.962 [2024-07-12 19:19:19.473119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f9f68 00:27:16.962 [2024-07-12 19:19:19.474679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.962 [2024-07-12 19:19:19.474696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:16.962 [2024-07-12 19:19:19.479579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190eb760 00:27:16.962 [2024-07-12 19:19:19.480270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.962 [2024-07-12 19:19:19.480288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:16.962 [2024-07-12 19:19:19.489074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e84c0 00:27:16.962 [2024-07-12 19:19:19.489912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.962 [2024-07-12 19:19:19.489938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:16.962 [2024-07-12 19:19:19.497707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e0a68 00:27:16.962 [2024-07-12 19:19:19.498537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.962 [2024-07-12 19:19:19.498554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:16.962 [2024-07-12 19:19:19.507250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f35f0 00:27:16.962 [2024-07-12 19:19:19.508191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.962 [2024-07-12 19:19:19.508209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:16.962 [2024-07-12 19:19:19.516732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190f3e60 00:27:16.962 [2024-07-12 19:19:19.517814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.962 [2024-07-12 19:19:19.517833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:16.962 [2024-07-12 19:19:19.525206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e95a0 00:27:16.962 [2024-07-12 19:19:19.525827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.962 [2024-07-12 19:19:19.525845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:17.221 [2024-07-12 19:19:19.534521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e5220 00:27:17.221 [2024-07-12 19:19:19.535021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.221 [2024-07-12 19:19:19.535039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.221 [2024-07-12 19:19:19.544032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa424d0) with pdu=0x2000190e84c0 00:27:17.221 [2024-07-12 19:19:19.544762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.221 [2024-07-12 19:19:19.544780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:17.221 00:27:17.221 Latency(us) 00:27:17.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.221 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:17.221 nvme0n1 : 2.00 28016.82 109.44 0.00 0.00 4562.43 1602.78 11739.49 00:27:17.221 =================================================================================================================== 00:27:17.221 Total : 28016.82 109.44 0.00 0.00 4562.43 1602.78 11739.49 00:27:17.221 0 00:27:17.221 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:17.221 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:17.221 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:17.221 | .driver_specific 00:27:17.221 | .nvme_error 00:27:17.221 | .status_code 00:27:17.221 | .command_transient_transport_error' 00:27:17.221 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:17.221 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 )) 00:27:17.221 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 453388 00:27:17.221 19:19:19 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 453388 ']' 00:27:17.221 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 453388 00:27:17.221 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:17.221 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:17.221 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 453388 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 453388' 00:27:17.480 killing process with pid 453388 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 453388 00:27:17.480 Received shutdown signal, test time was about 2.000000 seconds 00:27:17.480 00:27:17.480 Latency(us) 00:27:17.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.480 =================================================================================================================== 00:27:17.480 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 453388 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=453990 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 453990 /var/tmp/bperf.sock 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 453990 ']' 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:17.480 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:17.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
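The pass/fail check that just ran boils down to one RPC against the bdevperf instance: `bdev_get_iostat` exposes the per-bdev NVMe error counters (collected because `bdev_nvme_set_options --nvme-error-stat` is set before attach), and jq extracts the transient-transport-error count, 220 in this run. A minimal stand-alone sketch of `get_transient_errcount`, using the rpc.py path, socket, and bdev name from this job:

    # Minimal sketch of the transient-error check exercised above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    count=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( count > 0 ))   # the run passes only if the injected errors were observed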
00:27:17.481 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:17.481 19:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:17.481 [2024-07-12 19:19:20.020609] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:27:17.481 [2024-07-12 19:19:20.020661] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453990 ] 00:27:17.481 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:17.481 Zero copy mechanism will not be used. 00:27:17.481 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.739 [2024-07-12 19:19:20.091994] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.739 [2024-07-12 19:19:20.164564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.306 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:18.306 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:18.306 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:18.306 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:18.564 19:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:18.564 19:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.564 19:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:18.564 19:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.564 19:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:18.564 19:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:18.823 nvme0n1 00:27:18.823 19:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:18.823 19:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.823 19:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:18.823 19:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.823 19:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:18.823 19:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:19.084 I/O size of 131072 is greater than zero copy threshold (65536). 
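The setup just traced interleaves two RPC sockets: bperf_rpc drives the bdevperf initiator on /var/tmp/bperf.sock, while rpc_cmd arms the accel CRC32C corruption on the other SPDK application, whose crc32c computation then mismatches the digest carried in each PDU. Condensed into a sketch; the assumption that rpc_cmd uses the default socket (/var/tmp/spdk.sock) follows the usual autotest convention, and `-i 32` requests corruption on an interval of 32 operations as used here:

    # Sketch of the error-injection setup for the 128 KiB randwrite pass.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    "$RPC" accel_error_inject_error -o crc32c -t disable        # start from a clean state
    "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt on a 32-op interval
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests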
00:27:19.084 Zero copy mechanism will not be used. 00:27:19.084 Running I/O for 2 seconds... 00:27:19.084 [2024-07-12 19:19:21.414223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.414697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.084 [2024-07-12 19:19:21.414726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.084 [2024-07-12 19:19:21.418922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.419299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.084 [2024-07-12 19:19:21.419324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.084 [2024-07-12 19:19:21.423546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.423941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.084 [2024-07-12 19:19:21.423964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.084 [2024-07-12 19:19:21.428119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.428499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.084 [2024-07-12 19:19:21.428521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.084 [2024-07-12 19:19:21.432696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.433066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.084 [2024-07-12 19:19:21.433088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.084 [2024-07-12 19:19:21.437753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.438131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.084 [2024-07-12 19:19:21.438150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.084 [2024-07-12 19:19:21.442688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.443064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.084 [2024-07-12 19:19:21.443086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.084 [2024-07-12 19:19:21.447833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.448210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.084 [2024-07-12 19:19:21.448236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.084 [2024-07-12 19:19:21.453488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.453890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.084 [2024-07-12 19:19:21.453910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.084 [2024-07-12 19:19:21.459634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.460018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.084 [2024-07-12 19:19:21.460039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.084 [2024-07-12 19:19:21.465547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.465921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.084 [2024-07-12 19:19:21.465941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.084 [2024-07-12 19:19:21.471736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.472095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.084 [2024-07-12 19:19:21.472115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.084 [2024-07-12 19:19:21.477671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.478051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.084 [2024-07-12 19:19:21.478071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.084 [2024-07-12 19:19:21.483803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.484161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.084 
[2024-07-12 19:19:21.484181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.084 [2024-07-12 19:19:21.489429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.489781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.084 [2024-07-12 19:19:21.489800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.084 [2024-07-12 19:19:21.495551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.084 [2024-07-12 19:19:21.495926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.495945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.501461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.501830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.501850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.507797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.508192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.508211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.513429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.513801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.513821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.519546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.519935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.519955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.525786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.526153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.526176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.531534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.531895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.531914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.537523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.537584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.537601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.543760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.544144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.544163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.549579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.549942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.549961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.554936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.555316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.555335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.560365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.560739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.560757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.566757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.566972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.566990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.572287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.572650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.572668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.577017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.577337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.577356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.580766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.581032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.581050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.584348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.584621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.584639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.587938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.588201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.588220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.591968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.592240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.592258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.596255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.596521] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.596539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.601099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.601367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.601386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.605298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.605566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.605585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.609262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.609535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.609553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.612936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.613199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.613217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.616542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.616800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.616819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.620140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.620413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.620431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.623957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.624223] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.624246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.628267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.628517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.628535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.632760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.633026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.085 [2024-07-12 19:19:21.633044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.085 [2024-07-12 19:19:21.636978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.085 [2024-07-12 19:19:21.637253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.086 [2024-07-12 19:19:21.637271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.086 [2024-07-12 19:19:21.640922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.086 [2024-07-12 19:19:21.641184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.086 [2024-07-12 19:19:21.641202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.086 [2024-07-12 19:19:21.644839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.086 [2024-07-12 19:19:21.645116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.086 [2024-07-12 19:19:21.645138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.086 [2024-07-12 19:19:21.648666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.086 [2024-07-12 19:19:21.648937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.086 [2024-07-12 19:19:21.648955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.652344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 
00:27:19.347 [2024-07-12 19:19:21.652616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.652635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.655936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.656202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.656220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.659612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.659867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.659885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.663211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.663486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.663504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.666818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.667103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.667122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.670515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.670773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.670791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.674349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.674623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.674642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.678504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.678773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.678792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.682300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.682568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.682586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.686858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.687110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.687128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.691373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.691649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.691667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.695443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.695750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.695768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.700314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.700671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.700690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.705960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.706245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.706264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.711471] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.711699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.711717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.717215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.717480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.717502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.722734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.722999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.723017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.728478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.728717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.728735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.732816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.733071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.733090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.736735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.736988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.737006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.347 [2024-07-12 19:19:21.740557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.740819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.740837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
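Unlike the first job's single-block writes (len:1 with a 4096-byte IO size, per the latency table above), every WRITE in this pass reports len:32, which is simply the 131072-byte bdevperf I/O size spread over 4096-byte blocks; the block size is inferred from that earlier len:1 pairing rather than stated in this trace:

    # Why these records all show len:32 for the -o 131072 run:
    echo $(( 131072 / 4096 ))   # -> 32 blocks per 128 KiB WRITE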
00:27:19.347 [2024-07-12 19:19:21.744394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.347 [2024-07-12 19:19:21.744632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.347 [2024-07-12 19:19:21.744650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.748254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.748512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.748530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.752156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.752404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.752423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.756149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.756415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.756433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.760451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.760692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.760710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.764425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.764679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.764698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.768321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.768540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.768558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.772178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.772416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.772434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.775966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.776176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.776194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.779858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.780073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.780091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.783670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.783898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.783916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.787455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.787683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.787701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.791331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.791552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.791570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.795272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.795501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.795519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.798978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.799184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.799202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.802657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.802886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.802905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.806545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.806757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.806775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.810580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.810801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.810819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.814408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.814637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.814655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.818326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.818537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.818555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.822259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.822467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.822487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.826127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.826346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.826364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.829932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.830159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.830177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.833753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.833974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.833992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.837714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.837943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.837961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.841447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.841682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.841700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.845450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.845672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.845690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.850480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.850725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 
[2024-07-12 19:19:21.850743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.854431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.854653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.854672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.858165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.348 [2024-07-12 19:19:21.858408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.348 [2024-07-12 19:19:21.858426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.348 [2024-07-12 19:19:21.862013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.349 [2024-07-12 19:19:21.862222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.349 [2024-07-12 19:19:21.862245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.349 [2024-07-12 19:19:21.865845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.349 [2024-07-12 19:19:21.866053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.349 [2024-07-12 19:19:21.866071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.349 [2024-07-12 19:19:21.869746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.349 [2024-07-12 19:19:21.869964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.349 [2024-07-12 19:19:21.869982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.349 [2024-07-12 19:19:21.873669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.349 [2024-07-12 19:19:21.873882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.349 [2024-07-12 19:19:21.873900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.349 [2024-07-12 19:19:21.877428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.349 [2024-07-12 19:19:21.877651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:19.349 [2024-07-12 19:19:21.877669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.349 [2024-07-12 19:19:21.881134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.349 [2024-07-12 19:19:21.881353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.349 [2024-07-12 19:19:21.881371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.349 [2024-07-12 19:19:21.884972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.349 [2024-07-12 19:19:21.885176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.349 [2024-07-12 19:19:21.885194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.349 [2024-07-12 19:19:21.888879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.349 [2024-07-12 19:19:21.889093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.349 [2024-07-12 19:19:21.889111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.349 [2024-07-12 19:19:21.892875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.349 [2024-07-12 19:19:21.893109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.349 [2024-07-12 19:19:21.893127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.349 [2024-07-12 19:19:21.896846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.349 [2024-07-12 19:19:21.897083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.349 [2024-07-12 19:19:21.897101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.349 [2024-07-12 19:19:21.900720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.349 [2024-07-12 19:19:21.900929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.349 [2024-07-12 19:19:21.900947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.349 [2024-07-12 19:19:21.904635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.349 [2024-07-12 19:19:21.904872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.349 [2024-07-12 19:19:21.904890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.349 [2024-07-12 19:19:21.908914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.349 [2024-07-12 19:19:21.909126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.349 [2024-07-12 19:19:21.909145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.610 [2024-07-12 19:19:21.912784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.610 [2024-07-12 19:19:21.912986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.610 [2024-07-12 19:19:21.913005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.610 [2024-07-12 19:19:21.916780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.610 [2024-07-12 19:19:21.917017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.610 [2024-07-12 19:19:21.917034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.610 [2024-07-12 19:19:21.920603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.610 [2024-07-12 19:19:21.920847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.610 [2024-07-12 19:19:21.920865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.610 [2024-07-12 19:19:21.924713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.610 [2024-07-12 19:19:21.925128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.610 [2024-07-12 19:19:21.925150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.610 [2024-07-12 19:19:21.928895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.610 [2024-07-12 19:19:21.929103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.610 [2024-07-12 19:19:21.929121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.610 [2024-07-12 19:19:21.932859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.610 [2024-07-12 19:19:21.933094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.933112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.937021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:21.937272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.937290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.941514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:21.941733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.941751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.945122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:21.945343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.945361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.948679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:21.948899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.948917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.952280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:21.952520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.952538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.955840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:21.956069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.956088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.959583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 
19:19:21.959818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.959836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.963488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:21.963703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.963721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.968045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:21.968271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.968289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.972280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:21.972499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.972517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.976357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:21.976575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.976593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.980744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:21.980960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.980978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.984648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:21.984865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.984885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.988580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with 
pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:21.988796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.988814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.992607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:21.992819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.992838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:21.996458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:21.996660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:21.996679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:22.000342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:22.000579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:22.000598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:22.004262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:22.004480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:22.004499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:22.008130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:22.008363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:22.008382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:22.012092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:22.012305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:22.012325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:22.015981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:22.016193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.611 [2024-07-12 19:19:22.016212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.611 [2024-07-12 19:19:22.019848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.611 [2024-07-12 19:19:22.020052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.020071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.023625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.023857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.023876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.027469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.027683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.027708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.031865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.032082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.032100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.036395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.036622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.036640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.040549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.040761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.040779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.045006] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.045232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.045251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.049893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.050131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.050149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.053913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.054140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.054158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.057492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.057703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.057721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.061087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.061306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.061324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.064707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.064933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.064952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.068275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.068511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.068530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:19.612 [2024-07-12 19:19:22.072307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.072523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.072540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.076357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.076583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.076602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.080922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.081152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.081171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.085064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.085288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.085306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.089003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.089213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.089237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.092811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.093036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.093054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.096633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.096840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.096859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.101026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.101261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.101279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.106010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.612 [2024-07-12 19:19:22.106251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.612 [2024-07-12 19:19:22.106269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.612 [2024-07-12 19:19:22.109990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.613 [2024-07-12 19:19:22.110236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.613 [2024-07-12 19:19:22.110254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.613 [2024-07-12 19:19:22.113893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.613 [2024-07-12 19:19:22.114110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.613 [2024-07-12 19:19:22.114129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.613 [2024-07-12 19:19:22.117772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.613 [2024-07-12 19:19:22.118013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.613 [2024-07-12 19:19:22.118032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.613 [2024-07-12 19:19:22.121605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.613 [2024-07-12 19:19:22.121812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.613 [2024-07-12 19:19:22.121830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.613 [2024-07-12 19:19:22.125492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.613 [2024-07-12 19:19:22.125714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.613 [2024-07-12 19:19:22.125733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.613 [2024-07-12 19:19:22.129394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.613 [2024-07-12 19:19:22.129613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.613 [2024-07-12 19:19:22.129632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.613 [2024-07-12 19:19:22.133280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.613 [2024-07-12 19:19:22.133491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.613 [2024-07-12 19:19:22.133513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.613 [2024-07-12 19:19:22.137182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.613 [2024-07-12 19:19:22.137400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.613 [2024-07-12 19:19:22.137419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.613 [2024-07-12 19:19:22.141273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.613 [2024-07-12 19:19:22.141503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.613 [2024-07-12 19:19:22.141521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.613 [2024-07-12 19:19:22.145121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.613 [2024-07-12 19:19:22.145351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.613 [2024-07-12 19:19:22.145370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.613 [2024-07-12 19:19:22.149142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.613 [2024-07-12 19:19:22.149378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.613 [2024-07-12 19:19:22.149396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.613 [2024-07-12 19:19:22.153034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:19.613 [2024-07-12 19:19:22.153249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.613 [2024-07-12 19:19:22.153266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:19.613 [2024-07-12 19:19:22.156886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90
00:27:19.613 [2024-07-12 19:19:22.157103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.613 [2024-07-12 19:19:22.157120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same triple — data_crc32_calc_done "Data digest error" on tqpair=(0xa42810) with pdu=0x2000190fef90, the WRITE command print, then a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats for every injected error from 19:19:22.160652 through 19:19:22.476925 (console 00:27:19.613-00:27:20.137); qid:1 and len:32 are constant, only lba, cid (0 or 15), and sqhd vary ...]
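For context on the failures above: the NVMe/TCP target recomputes the CRC32C data digest (DDGST) over each received data PDU, and on a mismatch it completes the command with the retryable status COMMAND TRANSIENT TRANSPORT ERROR (00/22, dnr:0), which is what this error-injection pass is exercising. Below is a minimal, self-contained sketch of that kind of check. It is a generic bitwise CRC-32C (Castagnoli polynomial, reflected form 0x82F63B78, init and final XOR 0xFFFFFFFF), not SPDK's optimized implementation, and the payload and "received" digest are made-up values for illustration only.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Generic bitwise CRC-32C (Castagnoli), the algorithm NVMe/TCP
 * specifies for its header and data digests. */
static uint32_t crc32c(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t crc = 0xFFFFFFFFu;                        /* initial value */
    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++)              /* reflected poly */
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;                          /* final XOR */
}

int main(void)
{
    /* Hypothetical PDU payload plus a deliberately wrong received digest,
     * mimicking the injected corruption behind the errors in this log.
     * CRC-32C("123456789") is the well-known check value 0xE3069283. */
    const char payload[] = "123456789";
    uint32_t received_ddgst = 0xDEADBEEFu;             /* injected/corrupted */
    uint32_t computed_ddgst = crc32c(payload, sizeof(payload) - 1);

    if (computed_ddgst != received_ddgst) {
        /* A target in this situation fails the command as retryable,
         * matching the completions printed above:
         * COMMAND TRANSIENT TRANSPORT ERROR (00/22), dnr:0. */
        fprintf(stderr, "Data digest error: computed 0x%08X, received 0x%08X\n",
                computed_ddgst, received_ddgst);
        return 1;
    }
    return 0;
}

Because dnr (Do Not Retry) is 0 in every completion, the initiator is free to resubmit each failed WRITE, which is why the run continues rather than aborting on the first digest error.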
00:27:20.137 [2024-07-12 19:19:22.480761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90
[2024-07-12 19:19:22.480817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-12 19:19:22.480834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same triple continues from 19:19:22.484573 through 19:19:22.792697 (console 00:27:20.137-00:27:20.400), all on qid:1 cid:15, with only lba and sqhd varying ...]
00:27:20.400 [2024-07-12 19:19:22.798279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data
digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.798386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.798404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.804433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.804522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.804539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.810526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.810641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.810660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.817771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.817854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.817875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.824433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.824572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.824590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.831763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.831951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.831969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.838830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.838974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.838993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.845722] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.845824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.845842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.852638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.852815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.852834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.859803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.859921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.859940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.865454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.865531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.865550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.870336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.870394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.870411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.874974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.875035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.875052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.879494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.879548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.879565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
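Each repeating triple in the run above is a single failed I/O from the nvmf_digest_error test: tcp.c reports a data digest mismatch on a received PDU, nvme_qpair.c prints the WRITE command that carried it, and the command completes with TRANSIENT TRANSPORT ERROR (00/22). The digest being checked is the NVMe/TCP DDGST, a CRC32C over the PDU's data field. A minimal pure-Python sketch of that checksum, for illustration only (SPDK ships its own optimized implementation):

    # CRC32C, reflected polynomial 0x82F63B78: the checksum NVMe/TCP uses for
    # its header (HDGST) and data (DDGST) digests. A receiver recomputes it
    # over the PDU data and compares against the DDGST field; a mismatch is
    # what the tcp.c "Data digest error" lines above report.
    def crc32c(data: bytes) -> int:
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
        return crc ^ 0xFFFFFFFF

    # Standard CRC-32C check value:
    assert crc32c(b"123456789") == 0xE3069283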
00:27:20.400 [2024-07-12 19:19:22.884123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.884177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.884194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.888721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.888777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.888795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.893474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.893536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.893554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.898130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.898216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.898240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.902294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.902365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.902382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.906141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.906219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.400 [2024-07-12 19:19:22.906243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.400 [2024-07-12 19:19:22.910000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.400 [2024-07-12 19:19:22.910050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.401 [2024-07-12 19:19:22.910067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.401 [2024-07-12 19:19:22.914057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.401 [2024-07-12 19:19:22.914113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.401 [2024-07-12 19:19:22.914130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.401 [2024-07-12 19:19:22.917966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.401 [2024-07-12 19:19:22.918046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.401 [2024-07-12 19:19:22.918064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.401 [2024-07-12 19:19:22.921921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.401 [2024-07-12 19:19:22.921997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.401 [2024-07-12 19:19:22.922015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.401 [2024-07-12 19:19:22.925882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.401 [2024-07-12 19:19:22.925936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.401 [2024-07-12 19:19:22.925953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.401 [2024-07-12 19:19:22.929904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.401 [2024-07-12 19:19:22.929959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.401 [2024-07-12 19:19:22.929976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.401 [2024-07-12 19:19:22.933839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.401 [2024-07-12 19:19:22.933896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.401 [2024-07-12 19:19:22.933914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.401 [2024-07-12 19:19:22.937918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.401 [2024-07-12 19:19:22.937994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.401 [2024-07-12 19:19:22.938015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.401 [2024-07-12 19:19:22.941948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.401 [2024-07-12 19:19:22.942017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.401 [2024-07-12 19:19:22.942034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.401 [2024-07-12 19:19:22.945974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.401 [2024-07-12 19:19:22.946025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.401 [2024-07-12 19:19:22.946046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.401 [2024-07-12 19:19:22.950022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.401 [2024-07-12 19:19:22.950075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.401 [2024-07-12 19:19:22.950092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.401 [2024-07-12 19:19:22.953928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.401 [2024-07-12 19:19:22.953991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.401 [2024-07-12 19:19:22.954008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.401 [2024-07-12 19:19:22.957863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.401 [2024-07-12 19:19:22.957919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.401 [2024-07-12 19:19:22.957936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.401 [2024-07-12 19:19:22.961714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.401 [2024-07-12 19:19:22.961770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.401 [2024-07-12 19:19:22.961787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:22.965582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:22.965674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:22.965693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:22.969864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:22.969954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:22.969974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:22.974028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:22.974081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:22.974099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:22.977891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:22.977975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:22.977994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:22.981859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:22.981914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:22.981931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:22.985843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:22.985900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:22.985917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:22.989756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:22.989847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:22.989866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:22.993636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:22.993696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 
[2024-07-12 19:19:22.993714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:22.997351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:22.997411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:22.997427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.001609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.001692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.001710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.006237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.006308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.006325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.010286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.010346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.010363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.014189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.014263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.014280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.018414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.018474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.018491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.023730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.023897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.023915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.029701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.029824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.029843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.035221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.035367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.035386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.040628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.040820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.040838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.046184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.046314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.046332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.051753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.051903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.051921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.056901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.057063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.057081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.062515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.062653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.062675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.067966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.068136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.068154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.073177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.073325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.073342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.078350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.078529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.078547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.083740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.083890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.083909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.088970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.089086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.089105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.094497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.094625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.094644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.099914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.100104] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.100123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.105293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.662 [2024-07-12 19:19:23.105447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.662 [2024-07-12 19:19:23.105465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.662 [2024-07-12 19:19:23.111332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.111496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.111515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.116516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.116706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.116725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.121857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.122041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.122059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.127254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.127376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.127395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.132531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.132718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.132737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.137827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.137995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.138013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.143032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.143190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.143208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.148200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.148389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.148407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.153494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.153659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.153677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.159248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.159400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.159417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.164651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.164832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.164850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.169834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.169956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.169974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.175314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 
[2024-07-12 19:19:23.175502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.175520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.180550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.180719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.180738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.185842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.185931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.185949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.191430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.191624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.191642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.196702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.196856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.196875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.202144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.202288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.202309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.208041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.208231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.208250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.213545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with 
pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.213694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.213712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.218815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.219002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.219020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.663 [2024-07-12 19:19:23.224185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.663 [2024-07-12 19:19:23.224362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.663 [2024-07-12 19:19:23.224381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.229415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.229569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.229587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.234600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.234748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.234766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.240258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.240404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.240423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.245462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.245609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.245627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.250938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.251088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.251105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.256507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.256701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.256719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.262012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.262187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.262205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.267366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.267509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.267526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.272692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.272856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.272875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.278242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.278418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.278436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.283761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.283906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.283924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.288806] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.288973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.288991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.294385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.294505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.294527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.300089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.300305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.300323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.305655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.305783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.305801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.311425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.311606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.311623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.316526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.316676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.316694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.321858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.322015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.322033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
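The completion lines carry enough structure to tally these failures straight from a captured log: the parenthesized pair is status code type / status code in hex, and (00/22) is generic status 0x22, the Transient Transport Error that SPDK spells out in words. A sketch of an offline counter (the regex targets the exact line format above, not any SPDK API; the test itself uses bdev iostat, shown further below):

    import re

    # Matches spdk_nvme_print_completion output such as:
    #   ... *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 ...
    COMPLETION = re.compile(
        r"spdk_nvme_print_completion: \*NOTICE\*: .+? "
        r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:\d+ cid:\d+"
    )

    def count_transient_errors(log_text: str) -> int:
        # (00/22): status code type 00 (generic), status code 0x22
        # (Transient Transport Error).
        return sum(
            1
            for m in COMPLETION.finditer(log_text)
            if (m["sct"], m["sc"]) == ("00", "22")
        )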
00:27:20.923 [2024-07-12 19:19:23.327346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.327503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.327521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.332676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.332838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.332856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.337933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.338073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.338091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.343581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.923 [2024-07-12 19:19:23.343785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.923 [2024-07-12 19:19:23.343803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.923 [2024-07-12 19:19:23.348964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.924 [2024-07-12 19:19:23.349115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.924 [2024-07-12 19:19:23.349133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.924 [2024-07-12 19:19:23.354269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.924 [2024-07-12 19:19:23.354452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.924 [2024-07-12 19:19:23.354470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.924 [2024-07-12 19:19:23.359528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.924 [2024-07-12 19:19:23.359660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.924 [2024-07-12 19:19:23.359679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.924 [2024-07-12 19:19:23.364734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.924 [2024-07-12 19:19:23.364871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.924 [2024-07-12 19:19:23.364889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.924 [2024-07-12 19:19:23.369932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.924 [2024-07-12 19:19:23.370083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.924 [2024-07-12 19:19:23.370101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.924 [2024-07-12 19:19:23.375404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.924 [2024-07-12 19:19:23.375578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.924 [2024-07-12 19:19:23.375596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.924 [2024-07-12 19:19:23.380692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.924 [2024-07-12 19:19:23.380819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.924 [2024-07-12 19:19:23.380837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.924 [2024-07-12 19:19:23.386122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.924 [2024-07-12 19:19:23.386298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.924 [2024-07-12 19:19:23.386316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.924 [2024-07-12 19:19:23.392023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.924 [2024-07-12 19:19:23.392111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.924 [2024-07-12 19:19:23.392129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.924 [2024-07-12 19:19:23.398316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.924 [2024-07-12 19:19:23.398422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.924 [2024-07-12 19:19:23.398441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.924 [2024-07-12 19:19:23.404284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.924 [2024-07-12 19:19:23.404377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.924 [2024-07-12 19:19:23.404395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.924 [2024-07-12 19:19:23.408877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.924 [2024-07-12 19:19:23.408946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.924 [2024-07-12 19:19:23.408963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.924 [2024-07-12 19:19:23.412901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa42810) with pdu=0x2000190fef90 00:27:20.924 [2024-07-12 19:19:23.413027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.924 [2024-07-12 19:19:23.413045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.924 00:27:20.924 Latency(us) 00:27:20.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.924 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:20.924 nvme0n1 : 2.00 6692.53 836.57 0.00 0.00 2386.74 1681.14 7351.43 00:27:20.924 =================================================================================================================== 00:27:20.924 Total : 6692.53 836.57 0.00 0.00 2386.74 1681.14 7351.43 00:27:20.924 0 00:27:20.924 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:20.924 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:20.924 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:20.924 | .driver_specific 00:27:20.924 | .nvme_error 00:27:20.924 | .status_code 00:27:20.924 | .command_transient_transport_error' 00:27:20.924 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:21.183 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 432 > 0 )) 00:27:21.183 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 453990 00:27:21.183 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 453990 ']' 00:27:21.183 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 453990 00:27:21.183 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:21.183 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:21.183 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 453990 00:27:21.183 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:21.183 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:21.183 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 453990' 00:27:21.183 killing process with pid 453990 00:27:21.183 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 453990 00:27:21.183 Received shutdown signal, test time was about 2.000000 seconds 00:27:21.183 00:27:21.183 Latency(us) 00:27:21.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:21.183 =================================================================================================================== 00:27:21.183 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:21.183 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 453990 00:27:21.442 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 451865 00:27:21.442 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 451865 ']' 00:27:21.442 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 451865 00:27:21.442 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:21.442 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:21.442 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 451865 00:27:21.442 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:21.442 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:21.442 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 451865' 00:27:21.442 killing process with pid 451865 00:27:21.442 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 451865 00:27:21.442 19:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 451865 00:27:21.700 00:27:21.700 real 0m16.988s 00:27:21.700 user 0m32.373s 00:27:21.700 sys 0m4.755s 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:21.700 ************************************ 00:27:21.700 END TEST nvmf_digest_error 00:27:21.700 ************************************ 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:21.700 rmmod nvme_tcp 00:27:21.700 rmmod nvme_fabrics 00:27:21.700 rmmod nvme_keyring 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 451865 ']' 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 451865 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 451865 ']' 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 451865 00:27:21.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (451865) - No such process 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 451865 is not found' 00:27:21.700 Process with pid 451865 is not found 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:21.700 19:19:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.234 19:19:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:24.234 00:27:24.234 real 0m42.260s 00:27:24.234 user 1m6.648s 00:27:24.234 sys 0m13.998s 00:27:24.234 19:19:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:24.234 19:19:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:24.234 ************************************ 00:27:24.234 END TEST nvmf_digest 00:27:24.234 ************************************ 00:27:24.234 19:19:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:24.234 19:19:26 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:27:24.234 19:19:26 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:27:24.234 19:19:26 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:27:24.234 19:19:26 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:24.234 19:19:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:24.234 19:19:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:24.234 19:19:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:24.234 ************************************ 00:27:24.234 START TEST nvmf_bdevperf 00:27:24.234 ************************************ 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:24.234 * Looking for test storage... 
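Aside on the pass/fail criterion of the digest-error suite that just ended: the `(( 432 > 0 ))` check traced above comes from counting COMMAND TRANSIENT TRANSPORT ERROR completions recorded in the nvme bdev's iostat. A minimal sketch of that check follows; the helper name, jq path, and the /var/tmp/bperf.sock RPC socket are taken from this log, while $SPDK_ROOT standing in for the jenkins workspace path is an assumption (the real logic lives in host/digest.sh).

```bash
# Sketch of the digest-error pass/fail check traced above. The helper name,
# the jq path, and the bperf.sock RPC socket are the ones in this log;
# SPDK_ROOT is an assumed stand-in for the jenkins workspace path.
get_transient_errcount() {
    "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)  # 432 in this run
(( errcount > 0 ))                          # a zero count fails the test
```

Each corrupted data digest on a TCP PDU is surfaced to the host as a transient transport error (00/22), which is why the counter is expected to be well above zero after the 2-second randwrite run.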
00:27:24.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.234 19:19:26 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:24.235 19:19:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:29.511 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:29.511 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:29.511 Found net devices under 0000:86:00.0: cvl_0_0 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.511 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:29.512 Found net devices under 0000:86:00.1: cvl_0_1 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.512 19:19:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.512 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.512 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.512 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:29.512 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:29.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:27:29.771 00:27:29.771 --- 10.0.0.2 ping statistics --- 00:27:29.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.771 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:29.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:29.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:27:29.771 00:27:29.771 --- 10.0.0.1 ping statistics --- 00:27:29.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.771 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=458091 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 458091 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 458091 ']' 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:29.771 19:19:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:29.771 [2024-07-12 19:19:32.226694] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:27:29.771 [2024-07-12 19:19:32.226743] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.771 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.771 [2024-07-12 19:19:32.295256] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:30.030 [2024-07-12 19:19:32.373908] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:30.030 [2024-07-12 19:19:32.373947] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.030 [2024-07-12 19:19:32.373953] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.030 [2024-07-12 19:19:32.373959] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.030 [2024-07-12 19:19:32.373964] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.030 [2024-07-12 19:19:32.374144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.030 [2024-07-12 19:19:32.374036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.030 [2024-07-12 19:19:32.374146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:30.598 [2024-07-12 19:19:33.066997] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:30.598 Malloc0 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:30.598 [2024-07-12 19:19:33.128950] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.598 { 00:27:30.598 "params": { 00:27:30.598 "name": "Nvme$subsystem", 00:27:30.598 "trtype": "$TEST_TRANSPORT", 00:27:30.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.598 "adrfam": "ipv4", 00:27:30.598 "trsvcid": "$NVMF_PORT", 00:27:30.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.598 "hdgst": ${hdgst:-false}, 00:27:30.598 "ddgst": ${ddgst:-false} 00:27:30.598 }, 00:27:30.598 "method": "bdev_nvme_attach_controller" 00:27:30.598 } 00:27:30.598 EOF 00:27:30.598 )") 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:30.598 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:30.598 "params": { 00:27:30.598 "name": "Nvme1", 00:27:30.598 "trtype": "tcp", 00:27:30.598 "traddr": "10.0.0.2", 00:27:30.598 "adrfam": "ipv4", 00:27:30.598 "trsvcid": "4420", 00:27:30.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:30.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:30.598 "hdgst": false, 00:27:30.598 "ddgst": false 00:27:30.598 }, 00:27:30.599 "method": "bdev_nvme_attach_controller" 00:27:30.599 }' 00:27:30.858 [2024-07-12 19:19:33.178666] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:27:30.858 [2024-07-12 19:19:33.178712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458295 ] 00:27:30.858 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.858 [2024-07-12 19:19:33.246519] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.858 [2024-07-12 19:19:33.325775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.117 Running I/O for 1 seconds... 
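For readability, the target provisioning and the 1-second bdevperf run traced above can be read as a plain script. Every RPC name and argument below is taken verbatim from the trace; $SPDK_ROOT, the omitted netns plumbing, and the outer "subsystems" wrapper around the bdev_nvme_attach_controller params (the log only prints the params fragment that gen_nvmf_target_json emits) are assumptions.

```bash
#!/usr/bin/env bash
# Sketch of the provisioning RPCs and 1-second verify run traced above.
# Assumes SPDK_ROOT points at the checked-out spdk tree; the target-side
# network-namespace setup from nvmf/common.sh is omitted here.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK_ROOT/scripts/rpc.py"

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevperf reads its config from an inherited fd (--json /dev/fd/62); the
# heredoc below plays the role of gen_nvmf_target_json's output.
"$SPDK_ROOT/build/examples/bdevperf" --json /dev/fd/62 \
    -q 128 -o 4096 -w verify -t 1 62<<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
```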
00:27:32.056 00:27:32.056 Latency(us) 00:27:32.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.056 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:32.056 Verification LBA range: start 0x0 length 0x4000 00:27:32.056 Nvme1n1 : 1.01 11240.48 43.91 0.00 0.00 11344.28 2179.78 12252.38 00:27:32.056 =================================================================================================================== 00:27:32.056 Total : 11240.48 43.91 0.00 0.00 11344.28 2179.78 12252.38 00:27:32.316 19:19:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=458574 00:27:32.316 19:19:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:32.316 19:19:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:32.316 19:19:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:32.316 19:19:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:32.316 19:19:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:32.316 19:19:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.316 19:19:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.316 { 00:27:32.316 "params": { 00:27:32.316 "name": "Nvme$subsystem", 00:27:32.316 "trtype": "$TEST_TRANSPORT", 00:27:32.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.316 "adrfam": "ipv4", 00:27:32.316 "trsvcid": "$NVMF_PORT", 00:27:32.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.316 "hdgst": ${hdgst:-false}, 00:27:32.316 "ddgst": ${ddgst:-false} 00:27:32.316 }, 00:27:32.316 "method": "bdev_nvme_attach_controller" 00:27:32.316 } 00:27:32.316 EOF 00:27:32.316 )") 00:27:32.316 19:19:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:32.316 19:19:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:32.316 19:19:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:32.316 19:19:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:32.316 "params": { 00:27:32.316 "name": "Nvme1", 00:27:32.316 "trtype": "tcp", 00:27:32.316 "traddr": "10.0.0.2", 00:27:32.316 "adrfam": "ipv4", 00:27:32.316 "trsvcid": "4420", 00:27:32.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:32.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:32.316 "hdgst": false, 00:27:32.316 "ddgst": false 00:27:32.316 }, 00:27:32.316 "method": "bdev_nvme_attach_controller" 00:27:32.316 }' 00:27:32.316 [2024-07-12 19:19:34.800928] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:27:32.316 [2024-07-12 19:19:34.800975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458574 ] 00:27:32.316 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.316 [2024-07-12 19:19:34.865593] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.576 [2024-07-12 19:19:34.934858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.835 Running I/O for 15 seconds... 
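The 15-second run launched here is the failure-injection half of the bdevperf test: it is started with -f, evidently so the app survives the backing controller going away, after which the script SIGKILLs the nvmf target (pid 458091) and sleeps. That is what produces the flood of ABORTED - SQ DELETION (00/08) completions below. A rough sketch of the sequence, with pids, flags, and the gen_nvmf_target_json helper taken from this trace (everything else assumed):

```bash
# Rough sketch of the failure-injection step that produces the
# ABORTED - SQ DELETION flood below; gen_nvmf_target_json is the helper
# from test/nvmf/common.sh traced in this log.
"$SPDK_ROOT/build/examples/bdevperf" --json /dev/fd/63 \
    -q 128 -o 4096 -w verify -t 15 -f 63< <(gen_nvmf_target_json) &
bdevperfpid=$!            # 458574 in this run

sleep 3                   # let the verify workload get I/O in flight
kill -9 458091            # SIGKILL the nvmf target mid-run (pid from this log)
sleep 3                   # in-flight WRITEs now complete as ABORTED - SQ DELETION (00/08)
```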
00:27:35.375 19:19:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 458091 00:27:35.375 19:19:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:35.375 [2024-07-12 19:19:37.778501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:93224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.375 [2024-07-12 19:19:37.778539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.375 [2024-07-12 19:19:37.778559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.375 [2024-07-12 19:19:37.778569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.375 [2024-07-12 19:19:37.778579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.375 [2024-07-12 19:19:37.778586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.375 [2024-07-12 19:19:37.778596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.375 [2024-07-12 19:19:37.778603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.375 [2024-07-12 19:19:37.778613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.375 [2024-07-12 19:19:37.778622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.375 [2024-07-12 19:19:37.778630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.375 [2024-07-12 19:19:37.778637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.375 [2024-07-12 19:19:37.778648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.375 [2024-07-12 19:19:37.778657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.375 [2024-07-12 19:19:37.778667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.375 [2024-07-12 19:19:37.778674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.375 [2024-07-12 19:19:37.778682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.375 [2024-07-12 19:19:37.778689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.375 [2024-07-12 19:19:37.778697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:93792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.375 [2024-07-12 19:19:37.778704] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.375 [2024-07-12 19:19:37.778717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.375 [2024-07-12 19:19:37.778726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778874] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.778984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.778990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 
[2024-07-12 19:19:37.779190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.376 [2024-07-12 19:19:37.779397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.376 [2024-07-12 19:19:37.779407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.377 [2024-07-12 19:19:37.779414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.377 [2024-07-12 19:19:37.779428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.377 [2024-07-12 19:19:37.779443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.377 [2024-07-12 19:19:37.779458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.377 [2024-07-12 19:19:37.779475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.377 [2024-07-12 19:19:37.779494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.377 [2024-07-12 19:19:37.779514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:9 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.377 [2024-07-12 19:19:37.779538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93304 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:93376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:93384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:35.377 [2024-07-12 19:19:37.779919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.377 [2024-07-12 19:19:37.779977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.779985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.779991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.780000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.780007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.780015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.780021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.780029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.780035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.780043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.780049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.780057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.780063] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.780072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.377 [2024-07-12 19:19:37.780078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.377 [2024-07-12 19:19:37.780086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780215] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:93592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:93600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:93688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:93696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:35.378 [2024-07-12 19:19:37.780522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.378 [2024-07-12 19:19:37.780558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.378 [2024-07-12 19:19:37.780572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.378 [2024-07-12 19:19:37.780587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.378 [2024-07-12 19:19:37.780602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1022c70 is same with the state(5) to be set 00:27:35.378 [2024-07-12 19:19:37.780618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:35.378 [2024-07-12 19:19:37.780623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:35.378 [2024-07-12 19:19:37.780628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94240 len:8 PRP1 0x0 PRP2 0x0 00:27:35.378 [2024-07-12 19:19:37.780636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.378 [2024-07-12 19:19:37.780680] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1022c70 was disconnected and freed. reset controller. 
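The long run of paired nvme_io_qpair_print_command / spdk_nvme_print_completion notices above records each WRITE and READ still queued on qid:1 being "completed manually" with the status pair (00/08) as the qpair is torn down ("aborting queued i/o", then "qpair 0x1022c70 was disconnected and freed"). In SPDK's print format that pair is (sct/sc): the NVMe spec defines status code type 0x0 as the generic command status set, and code 0x08 within it as Command Aborted due to SQ Deletion, which is exactly the "ABORTED - SQ DELETION" string printed on every line. As a minimal sketch of decoding that "(sct/sc)" field when sifting a log like this one, the Python helper below is hypothetical (it is not part of the SPDK test suite) and maps only the codes that actually appear above:

import re

# Hypothetical helper: decode the "(sct/sc)" status pair that
# spdk_nvme_print_completion() emits, e.g. "(00/08)".
GENERIC_STATUS = {  # NVMe status code type 0x0 (generic command status)
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",  # the code seen throughout this dump
}

def decode_status(line: str) -> str:
    """Name the (sct/sc) pair found in one completion log line."""
    m = re.search(r"\(([0-9a-fA-F]{2})/([0-9a-fA-F]{2})\)", line)
    if not m:
        return "no status field found"
    sct, sc = int(m.group(1), 16), int(m.group(2), 16)
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct=0x{sct:x} sc=0x{sc:02x}"

print(decode_status("*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0"))
# -> ABORTED - SQ DELETION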
00:27:35.378 [2024-07-12 19:19:37.783531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.378 [2024-07-12 19:19:37.783583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.378 [2024-07-12 19:19:37.784162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-12 19:19:37.784177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.378 [2024-07-12 19:19:37.784184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.378 [2024-07-12 19:19:37.784368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.378 [2024-07-12 19:19:37.784548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.378 [2024-07-12 19:19:37.784557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.378 [2024-07-12 19:19:37.784564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.378 [2024-07-12 19:19:37.787410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.378 [2024-07-12 19:19:37.796943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.378 [2024-07-12 19:19:37.797310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.378 [2024-07-12 19:19:37.797329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.378 [2024-07-12 19:19:37.797337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.379 [2024-07-12 19:19:37.797517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.379 [2024-07-12 19:19:37.797696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.379 [2024-07-12 19:19:37.797709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.379 [2024-07-12 19:19:37.797716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.379 [2024-07-12 19:19:37.800556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.379 [2024-07-12 19:19:37.809930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.379 [2024-07-12 19:19:37.810368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.379 [2024-07-12 19:19:37.810411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.379 [2024-07-12 19:19:37.810433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.379 [2024-07-12 19:19:37.810889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.379 [2024-07-12 19:19:37.811053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.379 [2024-07-12 19:19:37.811062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.379 [2024-07-12 19:19:37.811068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.379 [2024-07-12 19:19:37.813834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.379 [2024-07-12 19:19:37.822842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.379 [2024-07-12 19:19:37.823262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.379 [2024-07-12 19:19:37.823278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.379 [2024-07-12 19:19:37.823285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.379 [2024-07-12 19:19:37.823449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.379 [2024-07-12 19:19:37.823612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.379 [2024-07-12 19:19:37.823622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.379 [2024-07-12 19:19:37.823628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.379 [2024-07-12 19:19:37.826322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.379 [2024-07-12 19:19:37.835691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.379 [2024-07-12 19:19:37.836124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.379 [2024-07-12 19:19:37.836141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.379 [2024-07-12 19:19:37.836148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.379 [2024-07-12 19:19:37.836326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.379 [2024-07-12 19:19:37.836500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.379 [2024-07-12 19:19:37.836509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.379 [2024-07-12 19:19:37.836515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.379 [2024-07-12 19:19:37.839179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.379 [2024-07-12 19:19:37.848626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.379 [2024-07-12 19:19:37.849076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.379 [2024-07-12 19:19:37.849119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.379 [2024-07-12 19:19:37.849142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.379 [2024-07-12 19:19:37.849663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.379 [2024-07-12 19:19:37.849838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.379 [2024-07-12 19:19:37.849847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.379 [2024-07-12 19:19:37.849854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.379 [2024-07-12 19:19:37.852502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.379 [2024-07-12 19:19:37.861559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.379 [2024-07-12 19:19:37.861987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.379 [2024-07-12 19:19:37.862033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.379 [2024-07-12 19:19:37.862055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.379 [2024-07-12 19:19:37.862643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.379 [2024-07-12 19:19:37.862850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.379 [2024-07-12 19:19:37.862860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.379 [2024-07-12 19:19:37.862866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.379 [2024-07-12 19:19:37.865495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.379 [2024-07-12 19:19:37.874436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.379 [2024-07-12 19:19:37.874848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.379 [2024-07-12 19:19:37.874866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.379 [2024-07-12 19:19:37.874872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.379 [2024-07-12 19:19:37.875034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.379 [2024-07-12 19:19:37.875197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.379 [2024-07-12 19:19:37.875206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.379 [2024-07-12 19:19:37.875212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.379 [2024-07-12 19:19:37.877854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.379 [2024-07-12 19:19:37.887408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.379 [2024-07-12 19:19:37.887798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.379 [2024-07-12 19:19:37.887815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.379 [2024-07-12 19:19:37.887822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.379 [2024-07-12 19:19:37.887988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.379 [2024-07-12 19:19:37.888151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.379 [2024-07-12 19:19:37.888160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.379 [2024-07-12 19:19:37.888166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.379 [2024-07-12 19:19:37.890862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.379 [2024-07-12 19:19:37.900203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.379 [2024-07-12 19:19:37.900631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.379 [2024-07-12 19:19:37.900674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.379 [2024-07-12 19:19:37.900697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.379 [2024-07-12 19:19:37.901179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.379 [2024-07-12 19:19:37.901369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.379 [2024-07-12 19:19:37.901378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.379 [2024-07-12 19:19:37.901384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.379 [2024-07-12 19:19:37.904051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.379 [2024-07-12 19:19:37.913172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.379 [2024-07-12 19:19:37.913622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.379 [2024-07-12 19:19:37.913664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.379 [2024-07-12 19:19:37.913685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.379 [2024-07-12 19:19:37.914198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.379 [2024-07-12 19:19:37.914391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.379 [2024-07-12 19:19:37.914401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.380 [2024-07-12 19:19:37.914407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.380 [2024-07-12 19:19:37.917072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.380 [2024-07-12 19:19:37.926059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.380 [2024-07-12 19:19:37.926460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.380 [2024-07-12 19:19:37.926476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.380 [2024-07-12 19:19:37.926483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.380 [2024-07-12 19:19:37.926646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.380 [2024-07-12 19:19:37.926809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.380 [2024-07-12 19:19:37.926818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.380 [2024-07-12 19:19:37.926828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.380 [2024-07-12 19:19:37.929530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.380 [2024-07-12 19:19:37.939168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.380 [2024-07-12 19:19:37.939575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.380 [2024-07-12 19:19:37.939593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.380 [2024-07-12 19:19:37.939600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.380 [2024-07-12 19:19:37.939771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.640 [2024-07-12 19:19:37.939943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.640 [2024-07-12 19:19:37.939955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.640 [2024-07-12 19:19:37.939961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.640 [2024-07-12 19:19:37.942776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.640 [2024-07-12 19:19:37.952162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.640 [2024-07-12 19:19:37.952503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.641 [2024-07-12 19:19:37.952520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.641 [2024-07-12 19:19:37.952527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.641 [2024-07-12 19:19:37.952700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.641 [2024-07-12 19:19:37.952872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.641 [2024-07-12 19:19:37.952882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.641 [2024-07-12 19:19:37.952889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.641 [2024-07-12 19:19:37.955589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.641 [2024-07-12 19:19:37.965032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.641 [2024-07-12 19:19:37.965457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.641 [2024-07-12 19:19:37.965474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.641 [2024-07-12 19:19:37.965481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.641 [2024-07-12 19:19:37.965643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.641 [2024-07-12 19:19:37.965806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.641 [2024-07-12 19:19:37.965815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.641 [2024-07-12 19:19:37.965821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.641 [2024-07-12 19:19:37.968521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.641 [2024-07-12 19:19:37.977830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.641 [2024-07-12 19:19:37.978248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.641 [2024-07-12 19:19:37.978289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.641 [2024-07-12 19:19:37.978311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.641 [2024-07-12 19:19:37.978899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.641 [2024-07-12 19:19:37.979063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.641 [2024-07-12 19:19:37.979071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.641 [2024-07-12 19:19:37.979077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.641 [2024-07-12 19:19:37.985256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.641 [2024-07-12 19:19:37.992859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.641 [2024-07-12 19:19:37.993315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.641 [2024-07-12 19:19:37.993358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.641 [2024-07-12 19:19:37.993381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.641 [2024-07-12 19:19:37.993933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.641 [2024-07-12 19:19:37.994188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.641 [2024-07-12 19:19:37.994200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.641 [2024-07-12 19:19:37.994209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.641 [2024-07-12 19:19:37.998283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.641 [2024-07-12 19:19:38.005865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.641 [2024-07-12 19:19:38.006299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.641 [2024-07-12 19:19:38.006343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.641 [2024-07-12 19:19:38.006365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.641 [2024-07-12 19:19:38.006944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.641 [2024-07-12 19:19:38.007387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.641 [2024-07-12 19:19:38.007396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.641 [2024-07-12 19:19:38.007403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.641 [2024-07-12 19:19:38.010140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.641 [2024-07-12 19:19:38.018760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.641 [2024-07-12 19:19:38.019172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.641 [2024-07-12 19:19:38.019189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.641 [2024-07-12 19:19:38.019196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.641 [2024-07-12 19:19:38.019387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.641 [2024-07-12 19:19:38.019564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.641 [2024-07-12 19:19:38.019574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.641 [2024-07-12 19:19:38.019580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.641 [2024-07-12 19:19:38.022240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.641 [2024-07-12 19:19:38.031583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.641 [2024-07-12 19:19:38.031997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.641 [2024-07-12 19:19:38.032014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:35.641 [2024-07-12 19:19:38.032022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:35.641 [2024-07-12 19:19:38.032194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:35.641 [2024-07-12 19:19:38.032391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.641 [2024-07-12 19:19:38.032402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.641 [2024-07-12 19:19:38.032408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.641 [2024-07-12 19:19:38.035240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:35.641 [2024-07-12 19:19:38.044788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.641 [2024-07-12 19:19:38.045220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.641 [2024-07-12 19:19:38.045244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.641 [2024-07-12 19:19:38.045252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.641 [2024-07-12 19:19:38.045430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.641 [2024-07-12 19:19:38.045607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.641 [2024-07-12 19:19:38.045616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.641 [2024-07-12 19:19:38.045623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.641 [2024-07-12 19:19:38.048463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.641 [2024-07-12 19:19:38.057890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.641 [2024-07-12 19:19:38.058322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.641 [2024-07-12 19:19:38.058338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.641 [2024-07-12 19:19:38.058345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.641 [2024-07-12 19:19:38.058517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.641 [2024-07-12 19:19:38.058690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.641 [2024-07-12 19:19:38.058699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.641 [2024-07-12 19:19:38.058706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.641 [2024-07-12 19:19:38.061457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.641 [2024-07-12 19:19:38.070874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.641 [2024-07-12 19:19:38.071219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.641 [2024-07-12 19:19:38.071242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.641 [2024-07-12 19:19:38.071249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.641 [2024-07-12 19:19:38.071420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.641 [2024-07-12 19:19:38.071593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.641 [2024-07-12 19:19:38.071602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.641 [2024-07-12 19:19:38.071608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.641 [2024-07-12 19:19:38.074358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.641 [2024-07-12 19:19:38.083797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.641 [2024-07-12 19:19:38.084207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.641 [2024-07-12 19:19:38.084255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.641 [2024-07-12 19:19:38.084281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.641 [2024-07-12 19:19:38.084860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.641 [2024-07-12 19:19:38.085451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.641 [2024-07-12 19:19:38.085478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.641 [2024-07-12 19:19:38.085509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.641 [2024-07-12 19:19:38.088262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.641 [2024-07-12 19:19:38.096635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.642 [2024-07-12 19:19:38.097044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.642 [2024-07-12 19:19:38.097087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.642 [2024-07-12 19:19:38.097108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.642 [2024-07-12 19:19:38.097526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.642 [2024-07-12 19:19:38.097700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.642 [2024-07-12 19:19:38.097709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.642 [2024-07-12 19:19:38.097716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.642 [2024-07-12 19:19:38.100365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.642 [2024-07-12 19:19:38.109508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.642 [2024-07-12 19:19:38.109926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.642 [2024-07-12 19:19:38.109942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.642 [2024-07-12 19:19:38.109953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.642 [2024-07-12 19:19:38.110125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.642 [2024-07-12 19:19:38.110304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.642 [2024-07-12 19:19:38.110313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.642 [2024-07-12 19:19:38.110320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.642 [2024-07-12 19:19:38.112987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.642 [2024-07-12 19:19:38.122336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.642 [2024-07-12 19:19:38.122747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.642 [2024-07-12 19:19:38.122785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.642 [2024-07-12 19:19:38.122809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.642 [2024-07-12 19:19:38.123366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.642 [2024-07-12 19:19:38.123530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.642 [2024-07-12 19:19:38.123538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.642 [2024-07-12 19:19:38.123544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.642 [2024-07-12 19:19:38.126141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.642 [2024-07-12 19:19:38.135216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.642 [2024-07-12 19:19:38.135606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.642 [2024-07-12 19:19:38.135622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.642 [2024-07-12 19:19:38.135629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.642 [2024-07-12 19:19:38.135792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.642 [2024-07-12 19:19:38.135956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.642 [2024-07-12 19:19:38.135965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.642 [2024-07-12 19:19:38.135971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.642 [2024-07-12 19:19:38.138690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.642 [2024-07-12 19:19:38.148133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.642 [2024-07-12 19:19:38.148478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.642 [2024-07-12 19:19:38.148494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.642 [2024-07-12 19:19:38.148501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.642 [2024-07-12 19:19:38.148663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.642 [2024-07-12 19:19:38.148826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.642 [2024-07-12 19:19:38.148838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.642 [2024-07-12 19:19:38.148844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.642 [2024-07-12 19:19:38.151538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.642 [2024-07-12 19:19:38.160978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.642 [2024-07-12 19:19:38.161398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.642 [2024-07-12 19:19:38.161415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.642 [2024-07-12 19:19:38.161421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.642 [2024-07-12 19:19:38.161584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.642 [2024-07-12 19:19:38.161746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.642 [2024-07-12 19:19:38.161755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.642 [2024-07-12 19:19:38.161761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.642 [2024-07-12 19:19:38.164455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.642 [2024-07-12 19:19:38.173851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.642 [2024-07-12 19:19:38.174206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.642 [2024-07-12 19:19:38.174222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.642 [2024-07-12 19:19:38.174236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.642 [2024-07-12 19:19:38.174423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.642 [2024-07-12 19:19:38.174596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.642 [2024-07-12 19:19:38.174605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.642 [2024-07-12 19:19:38.174611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.642 [2024-07-12 19:19:38.177266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.642 [2024-07-12 19:19:38.186775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.642 [2024-07-12 19:19:38.187162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.642 [2024-07-12 19:19:38.187179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.642 [2024-07-12 19:19:38.187187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.642 [2024-07-12 19:19:38.187376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.642 [2024-07-12 19:19:38.187550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.642 [2024-07-12 19:19:38.187559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.642 [2024-07-12 19:19:38.187565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.642 [2024-07-12 19:19:38.190228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.642 [2024-07-12 19:19:38.199682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.642 [2024-07-12 19:19:38.200021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.642 [2024-07-12 19:19:38.200037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.642 [2024-07-12 19:19:38.200044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.642 [2024-07-12 19:19:38.200207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.642 [2024-07-12 19:19:38.200400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.642 [2024-07-12 19:19:38.200410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.642 [2024-07-12 19:19:38.200416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.642 [2024-07-12 19:19:38.203140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.903 [2024-07-12 19:19:38.212722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.903 [2024-07-12 19:19:38.213183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.903 [2024-07-12 19:19:38.213200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.903 [2024-07-12 19:19:38.213208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.903 [2024-07-12 19:19:38.213405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.903 [2024-07-12 19:19:38.213584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.903 [2024-07-12 19:19:38.213593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.903 [2024-07-12 19:19:38.213600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.903 [2024-07-12 19:19:38.216281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.903 [2024-07-12 19:19:38.225582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.903 [2024-07-12 19:19:38.225994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.903 [2024-07-12 19:19:38.226010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.903 [2024-07-12 19:19:38.226017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.903 [2024-07-12 19:19:38.226179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.903 [2024-07-12 19:19:38.226369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.903 [2024-07-12 19:19:38.226379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.903 [2024-07-12 19:19:38.226386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.903 [2024-07-12 19:19:38.229115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.903 [2024-07-12 19:19:38.238509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.903 [2024-07-12 19:19:38.238909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.903 [2024-07-12 19:19:38.238951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.903 [2024-07-12 19:19:38.238973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.903 [2024-07-12 19:19:38.239574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.903 [2024-07-12 19:19:38.239749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.903 [2024-07-12 19:19:38.239757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.903 [2024-07-12 19:19:38.239763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.903 [2024-07-12 19:19:38.242418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.903 [2024-07-12 19:19:38.251321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.903 [2024-07-12 19:19:38.251664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.903 [2024-07-12 19:19:38.251681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.903 [2024-07-12 19:19:38.251688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.903 [2024-07-12 19:19:38.251850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.903 [2024-07-12 19:19:38.252013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.903 [2024-07-12 19:19:38.252022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.903 [2024-07-12 19:19:38.252028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.903 [2024-07-12 19:19:38.254631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.903 [2024-07-12 19:19:38.264321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.903 [2024-07-12 19:19:38.264745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.903 [2024-07-12 19:19:38.264761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.904 [2024-07-12 19:19:38.264769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.904 [2024-07-12 19:19:38.264931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.904 [2024-07-12 19:19:38.265095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.904 [2024-07-12 19:19:38.265104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.904 [2024-07-12 19:19:38.265110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.904 [2024-07-12 19:19:38.267751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.904 [2024-07-12 19:19:38.277205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.904 [2024-07-12 19:19:38.277600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.904 [2024-07-12 19:19:38.277616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.904 [2024-07-12 19:19:38.277624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.904 [2024-07-12 19:19:38.277786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.904 [2024-07-12 19:19:38.277949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.904 [2024-07-12 19:19:38.277958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.904 [2024-07-12 19:19:38.277967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.904 [2024-07-12 19:19:38.280716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.904 [2024-07-12 19:19:38.290300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.904 [2024-07-12 19:19:38.290726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.904 [2024-07-12 19:19:38.290743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.904 [2024-07-12 19:19:38.290750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.904 [2024-07-12 19:19:38.290921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.904 [2024-07-12 19:19:38.291093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.904 [2024-07-12 19:19:38.291103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.904 [2024-07-12 19:19:38.291109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.904 [2024-07-12 19:19:38.293944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.904 [2024-07-12 19:19:38.303418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.904 [2024-07-12 19:19:38.303858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.904 [2024-07-12 19:19:38.303901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.904 [2024-07-12 19:19:38.303923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.904 [2024-07-12 19:19:38.304517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.904 [2024-07-12 19:19:38.304995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.904 [2024-07-12 19:19:38.305004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.904 [2024-07-12 19:19:38.305011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.904 [2024-07-12 19:19:38.307902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.904 [2024-07-12 19:19:38.316389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.904 [2024-07-12 19:19:38.316805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.904 [2024-07-12 19:19:38.316846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.904 [2024-07-12 19:19:38.316868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.904 [2024-07-12 19:19:38.317461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.904 [2024-07-12 19:19:38.318035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.904 [2024-07-12 19:19:38.318045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.904 [2024-07-12 19:19:38.318052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.904 [2024-07-12 19:19:38.320687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.904 [2024-07-12 19:19:38.329309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.904 [2024-07-12 19:19:38.329729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.904 [2024-07-12 19:19:38.329745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.904 [2024-07-12 19:19:38.329753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.904 [2024-07-12 19:19:38.329915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.904 [2024-07-12 19:19:38.330078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.904 [2024-07-12 19:19:38.330086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.904 [2024-07-12 19:19:38.330092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.904 [2024-07-12 19:19:38.332789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.904 [2024-07-12 19:19:38.342284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.904 [2024-07-12 19:19:38.342571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.904 [2024-07-12 19:19:38.342609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.904 [2024-07-12 19:19:38.342633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.904 [2024-07-12 19:19:38.343155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.904 [2024-07-12 19:19:38.343339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.904 [2024-07-12 19:19:38.343348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.904 [2024-07-12 19:19:38.343355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.904 [2024-07-12 19:19:38.346030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.904 [2024-07-12 19:19:38.355201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.904 [2024-07-12 19:19:38.355558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.904 [2024-07-12 19:19:38.355575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.904 [2024-07-12 19:19:38.355583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.904 [2024-07-12 19:19:38.355745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.904 [2024-07-12 19:19:38.355908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.904 [2024-07-12 19:19:38.355917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.904 [2024-07-12 19:19:38.355923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.904 [2024-07-12 19:19:38.358561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.904 [2024-07-12 19:19:38.368184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.904 [2024-07-12 19:19:38.368522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.904 [2024-07-12 19:19:38.368539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.904 [2024-07-12 19:19:38.368546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.904 [2024-07-12 19:19:38.368709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.904 [2024-07-12 19:19:38.368877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.904 [2024-07-12 19:19:38.368887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.904 [2024-07-12 19:19:38.368893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.904 [2024-07-12 19:19:38.371589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.904 [2024-07-12 19:19:38.380985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.904 [2024-07-12 19:19:38.381374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.904 [2024-07-12 19:19:38.381390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.904 [2024-07-12 19:19:38.381398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.904 [2024-07-12 19:19:38.381561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.904 [2024-07-12 19:19:38.381724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.904 [2024-07-12 19:19:38.381733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.904 [2024-07-12 19:19:38.381739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.904 [2024-07-12 19:19:38.384489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.904 [2024-07-12 19:19:38.393884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.904 [2024-07-12 19:19:38.394246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.904 [2024-07-12 19:19:38.394290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.904 [2024-07-12 19:19:38.394312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.904 [2024-07-12 19:19:38.394892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.904 [2024-07-12 19:19:38.395333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.904 [2024-07-12 19:19:38.395343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.904 [2024-07-12 19:19:38.395350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.904 [2024-07-12 19:19:38.398020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.905 [2024-07-12 19:19:38.406961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.905 [2024-07-12 19:19:38.407374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.905 [2024-07-12 19:19:38.407391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.905 [2024-07-12 19:19:38.407399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.905 [2024-07-12 19:19:38.407562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.905 [2024-07-12 19:19:38.407726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.905 [2024-07-12 19:19:38.407734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.905 [2024-07-12 19:19:38.407741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.905 [2024-07-12 19:19:38.410428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.905 [2024-07-12 19:19:38.419984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.905 [2024-07-12 19:19:38.420412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.905 [2024-07-12 19:19:38.420457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.905 [2024-07-12 19:19:38.420481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.905 [2024-07-12 19:19:38.421060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.905 [2024-07-12 19:19:38.421286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.905 [2024-07-12 19:19:38.421295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.905 [2024-07-12 19:19:38.421301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.905 [2024-07-12 19:19:38.423895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.905 [2024-07-12 19:19:38.432862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.905 [2024-07-12 19:19:38.433259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.905 [2024-07-12 19:19:38.433276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.905 [2024-07-12 19:19:38.433283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.905 [2024-07-12 19:19:38.433446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.905 [2024-07-12 19:19:38.433609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.905 [2024-07-12 19:19:38.433618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.905 [2024-07-12 19:19:38.433624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.905 [2024-07-12 19:19:38.436267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.905 [2024-07-12 19:19:38.445836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.905 [2024-07-12 19:19:38.446258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.905 [2024-07-12 19:19:38.446275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.905 [2024-07-12 19:19:38.446282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.905 [2024-07-12 19:19:38.446454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.905 [2024-07-12 19:19:38.446627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.905 [2024-07-12 19:19:38.446636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.905 [2024-07-12 19:19:38.446642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.905 [2024-07-12 19:19:38.449352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:35.905 [2024-07-12 19:19:38.458652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:35.905 [2024-07-12 19:19:38.459082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.905 [2024-07-12 19:19:38.459126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:35.905 [2024-07-12 19:19:38.459156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:35.905 [2024-07-12 19:19:38.459600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:35.905 [2024-07-12 19:19:38.459775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:35.905 [2024-07-12 19:19:38.459784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:35.905 [2024-07-12 19:19:38.459790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:35.905 [2024-07-12 19:19:38.462441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.166 [2024-07-12 19:19:38.471664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.166 [2024-07-12 19:19:38.472007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.166 [2024-07-12 19:19:38.472023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.166 [2024-07-12 19:19:38.472030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.166 [2024-07-12 19:19:38.472193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.166 [2024-07-12 19:19:38.472386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.166 [2024-07-12 19:19:38.472402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.166 [2024-07-12 19:19:38.472409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.166 [2024-07-12 19:19:38.475159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.166 [2024-07-12 19:19:38.484502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.166 [2024-07-12 19:19:38.484859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.166 [2024-07-12 19:19:38.484875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.166 [2024-07-12 19:19:38.484882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.166 [2024-07-12 19:19:38.485044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.166 [2024-07-12 19:19:38.485208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.166 [2024-07-12 19:19:38.485217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.166 [2024-07-12 19:19:38.485230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.166 [2024-07-12 19:19:38.487922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.166 [2024-07-12 19:19:38.497419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.166 [2024-07-12 19:19:38.497811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.166 [2024-07-12 19:19:38.497827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.166 [2024-07-12 19:19:38.497834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.166 [2024-07-12 19:19:38.497997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.166 [2024-07-12 19:19:38.498161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.166 [2024-07-12 19:19:38.498173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.166 [2024-07-12 19:19:38.498180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.166 [2024-07-12 19:19:38.500875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.166 [2024-07-12 19:19:38.510358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.166 [2024-07-12 19:19:38.510720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.166 [2024-07-12 19:19:38.510737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.166 [2024-07-12 19:19:38.510744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.166 [2024-07-12 19:19:38.510917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.166 [2024-07-12 19:19:38.511090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.166 [2024-07-12 19:19:38.511100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.166 [2024-07-12 19:19:38.511106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.166 [2024-07-12 19:19:38.513797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.166 [2024-07-12 19:19:38.523301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.166 [2024-07-12 19:19:38.523709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.166 [2024-07-12 19:19:38.523725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.167 [2024-07-12 19:19:38.523733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.167 [2024-07-12 19:19:38.523895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.167 [2024-07-12 19:19:38.524059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.167 [2024-07-12 19:19:38.524067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.167 [2024-07-12 19:19:38.524073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.167 [2024-07-12 19:19:38.526774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.167 [2024-07-12 19:19:38.536154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.167 [2024-07-12 19:19:38.536530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.167 [2024-07-12 19:19:38.536548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.167 [2024-07-12 19:19:38.536555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.167 [2024-07-12 19:19:38.536728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.167 [2024-07-12 19:19:38.536901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.167 [2024-07-12 19:19:38.536911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.167 [2024-07-12 19:19:38.536917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.167 [2024-07-12 19:19:38.539676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.167 [2024-07-12 19:19:38.549108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.167 [2024-07-12 19:19:38.549443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.167 [2024-07-12 19:19:38.549461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.167 [2024-07-12 19:19:38.549468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.167 [2024-07-12 19:19:38.549640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.167 [2024-07-12 19:19:38.549812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.167 [2024-07-12 19:19:38.549822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.167 [2024-07-12 19:19:38.549828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.167 [2024-07-12 19:19:38.552683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.167 [2024-07-12 19:19:38.562171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.167 [2024-07-12 19:19:38.562513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.167 [2024-07-12 19:19:38.562531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.167 [2024-07-12 19:19:38.562537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.167 [2024-07-12 19:19:38.562709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.167 [2024-07-12 19:19:38.562881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.167 [2024-07-12 19:19:38.562890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.167 [2024-07-12 19:19:38.562897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.167 [2024-07-12 19:19:38.565652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.167 [2024-07-12 19:19:38.575206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.167 [2024-07-12 19:19:38.575487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.167 [2024-07-12 19:19:38.575503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.167 [2024-07-12 19:19:38.575510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.167 [2024-07-12 19:19:38.575672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.167 [2024-07-12 19:19:38.575836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.167 [2024-07-12 19:19:38.575845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.167 [2024-07-12 19:19:38.575850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.167 [2024-07-12 19:19:38.578553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.167 [2024-07-12 19:19:38.588432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.167 [2024-07-12 19:19:38.588714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.167 [2024-07-12 19:19:38.588731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.167 [2024-07-12 19:19:38.588739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.167 [2024-07-12 19:19:38.588921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.167 [2024-07-12 19:19:38.589099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.167 [2024-07-12 19:19:38.589109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.167 [2024-07-12 19:19:38.589116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.167 [2024-07-12 19:19:38.591984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.167 [2024-07-12 19:19:38.601499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.167 [2024-07-12 19:19:38.601836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.167 [2024-07-12 19:19:38.601853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.167 [2024-07-12 19:19:38.601860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.167 [2024-07-12 19:19:38.602023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.167 [2024-07-12 19:19:38.602187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.167 [2024-07-12 19:19:38.602196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.167 [2024-07-12 19:19:38.602202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.167 [2024-07-12 19:19:38.604860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.167 [2024-07-12 19:19:38.614475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.167 [2024-07-12 19:19:38.614808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.167 [2024-07-12 19:19:38.614825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.167 [2024-07-12 19:19:38.614833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.167 [2024-07-12 19:19:38.615004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.167 [2024-07-12 19:19:38.615187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.167 [2024-07-12 19:19:38.615195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.167 [2024-07-12 19:19:38.615201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.167 [2024-07-12 19:19:38.617856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.167 [2024-07-12 19:19:38.627510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.167 [2024-07-12 19:19:38.627916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.167 [2024-07-12 19:19:38.627958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.167 [2024-07-12 19:19:38.627980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.167 [2024-07-12 19:19:38.628575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.167 [2024-07-12 19:19:38.628782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.167 [2024-07-12 19:19:38.628792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.167 [2024-07-12 19:19:38.628801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.167 [2024-07-12 19:19:38.631534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.167 [2024-07-12 19:19:38.640487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.167 [2024-07-12 19:19:38.640806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.167 [2024-07-12 19:19:38.640823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.167 [2024-07-12 19:19:38.640830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.167 [2024-07-12 19:19:38.640993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.167 [2024-07-12 19:19:38.641156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.167 [2024-07-12 19:19:38.641165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.167 [2024-07-12 19:19:38.641171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.167 [2024-07-12 19:19:38.643875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.167 [2024-07-12 19:19:38.653460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.167 [2024-07-12 19:19:38.653743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.167 [2024-07-12 19:19:38.653759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.167 [2024-07-12 19:19:38.653767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.167 [2024-07-12 19:19:38.653930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.167 [2024-07-12 19:19:38.654094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.167 [2024-07-12 19:19:38.654103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.167 [2024-07-12 19:19:38.654109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.167 [2024-07-12 19:19:38.656813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.167 [2024-07-12 19:19:38.666431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.167 [2024-07-12 19:19:38.666759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.167 [2024-07-12 19:19:38.666776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.167 [2024-07-12 19:19:38.666783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.168 [2024-07-12 19:19:38.666945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.168 [2024-07-12 19:19:38.667108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.168 [2024-07-12 19:19:38.667118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.168 [2024-07-12 19:19:38.667124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.168 [2024-07-12 19:19:38.669870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.168 [2024-07-12 19:19:38.679316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.168 [2024-07-12 19:19:38.679696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.168 [2024-07-12 19:19:38.679712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.168 [2024-07-12 19:19:38.679719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.168 [2024-07-12 19:19:38.679890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.168 [2024-07-12 19:19:38.680062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.168 [2024-07-12 19:19:38.680071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.168 [2024-07-12 19:19:38.680078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.168 [2024-07-12 19:19:38.682767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.168 [2024-07-12 19:19:38.692319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.168 [2024-07-12 19:19:38.692721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.168 [2024-07-12 19:19:38.692763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.168 [2024-07-12 19:19:38.692785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.168 [2024-07-12 19:19:38.693378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.168 [2024-07-12 19:19:38.693948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.168 [2024-07-12 19:19:38.693958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.168 [2024-07-12 19:19:38.693964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.168 [2024-07-12 19:19:38.696611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.168 [2024-07-12 19:19:38.705325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.168 [2024-07-12 19:19:38.705676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.168 [2024-07-12 19:19:38.705693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.168 [2024-07-12 19:19:38.705701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.168 [2024-07-12 19:19:38.705864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.168 [2024-07-12 19:19:38.706028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.168 [2024-07-12 19:19:38.706037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.168 [2024-07-12 19:19:38.706042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.168 [2024-07-12 19:19:38.708789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.168 [2024-07-12 19:19:38.718314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.168 [2024-07-12 19:19:38.718657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.168 [2024-07-12 19:19:38.718674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.168 [2024-07-12 19:19:38.718681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.168 [2024-07-12 19:19:38.718848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.168 [2024-07-12 19:19:38.719011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.168 [2024-07-12 19:19:38.719020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.168 [2024-07-12 19:19:38.719027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.168 [2024-07-12 19:19:38.721672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.168 [2024-07-12 19:19:38.731339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.168 [2024-07-12 19:19:38.731719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.168 [2024-07-12 19:19:38.731736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.168 [2024-07-12 19:19:38.731744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.168 [2024-07-12 19:19:38.731915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.429 [2024-07-12 19:19:38.732088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.429 [2024-07-12 19:19:38.732098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.429 [2024-07-12 19:19:38.732106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.429 [2024-07-12 19:19:38.734890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.429 [2024-07-12 19:19:38.744281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.429 [2024-07-12 19:19:38.744611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.429 [2024-07-12 19:19:38.744628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.429 [2024-07-12 19:19:38.744636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.429 [2024-07-12 19:19:38.744808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.429 [2024-07-12 19:19:38.744982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.429 [2024-07-12 19:19:38.744992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.429 [2024-07-12 19:19:38.744998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.429 [2024-07-12 19:19:38.747736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.429 [2024-07-12 19:19:38.757189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.429 [2024-07-12 19:19:38.757514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.429 [2024-07-12 19:19:38.757531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.429 [2024-07-12 19:19:38.757538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.429 [2024-07-12 19:19:38.757700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.429 [2024-07-12 19:19:38.757864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.429 [2024-07-12 19:19:38.757873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.429 [2024-07-12 19:19:38.757878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.429 [2024-07-12 19:19:38.760630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.429 [2024-07-12 19:19:38.770259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.429 [2024-07-12 19:19:38.770598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.429 [2024-07-12 19:19:38.770615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.429 [2024-07-12 19:19:38.770622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.429 [2024-07-12 19:19:38.770795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.429 [2024-07-12 19:19:38.770967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.429 [2024-07-12 19:19:38.770976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.429 [2024-07-12 19:19:38.770983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.429 [2024-07-12 19:19:38.773731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.429 [2024-07-12 19:19:38.783329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.429 [2024-07-12 19:19:38.783737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.429 [2024-07-12 19:19:38.783753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.429 [2024-07-12 19:19:38.783761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.429 [2024-07-12 19:19:38.783939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.429 [2024-07-12 19:19:38.784118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.429 [2024-07-12 19:19:38.784128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.429 [2024-07-12 19:19:38.784134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.429 [2024-07-12 19:19:38.787123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.429 [2024-07-12 19:19:38.796369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.429 [2024-07-12 19:19:38.796654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.429 [2024-07-12 19:19:38.796672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.429 [2024-07-12 19:19:38.796679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.429 [2024-07-12 19:19:38.796853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.429 [2024-07-12 19:19:38.797027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.429 [2024-07-12 19:19:38.797036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.429 [2024-07-12 19:19:38.797042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.429 [2024-07-12 19:19:38.799869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.429 [2024-07-12 19:19:38.809432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.429 [2024-07-12 19:19:38.809826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.429 [2024-07-12 19:19:38.809869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.429 [2024-07-12 19:19:38.809898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.429 [2024-07-12 19:19:38.810351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.429 [2024-07-12 19:19:38.810538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.429 [2024-07-12 19:19:38.810548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.429 [2024-07-12 19:19:38.810554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.429 [2024-07-12 19:19:38.813363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.429 [2024-07-12 19:19:38.822365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.429 [2024-07-12 19:19:38.822719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.429 [2024-07-12 19:19:38.822736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.429 [2024-07-12 19:19:38.822743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.429 [2024-07-12 19:19:38.822916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.429 [2024-07-12 19:19:38.823088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.429 [2024-07-12 19:19:38.823098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.429 [2024-07-12 19:19:38.823104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.429 [2024-07-12 19:19:38.825747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.429 [2024-07-12 19:19:38.835344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.429 [2024-07-12 19:19:38.835640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.429 [2024-07-12 19:19:38.835656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.429 [2024-07-12 19:19:38.835663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.429 [2024-07-12 19:19:38.835825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.429 [2024-07-12 19:19:38.835988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.429 [2024-07-12 19:19:38.835998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.429 [2024-07-12 19:19:38.836004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.429 [2024-07-12 19:19:38.838709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.429 [2024-07-12 19:19:38.848348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.429 [2024-07-12 19:19:38.848631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.429 [2024-07-12 19:19:38.848648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.429 [2024-07-12 19:19:38.848656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.429 [2024-07-12 19:19:38.848829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.429 [2024-07-12 19:19:38.849005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.429 [2024-07-12 19:19:38.849015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.429 [2024-07-12 19:19:38.849022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.429 [2024-07-12 19:19:38.851668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.429 [2024-07-12 19:19:38.861324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.429 [2024-07-12 19:19:38.861727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.429 [2024-07-12 19:19:38.861743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.430 [2024-07-12 19:19:38.861749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.430 [2024-07-12 19:19:38.861913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.430 [2024-07-12 19:19:38.862076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.430 [2024-07-12 19:19:38.862085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.430 [2024-07-12 19:19:38.862092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.430 [2024-07-12 19:19:38.864895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.430 [2024-07-12 19:19:38.874485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.430 [2024-07-12 19:19:38.874841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.430 [2024-07-12 19:19:38.874858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.430 [2024-07-12 19:19:38.874865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.430 [2024-07-12 19:19:38.875043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.430 [2024-07-12 19:19:38.875221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.430 [2024-07-12 19:19:38.875236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.430 [2024-07-12 19:19:38.875243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.430 [2024-07-12 19:19:38.878082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.430 [2024-07-12 19:19:38.887644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.430 [2024-07-12 19:19:38.888076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.430 [2024-07-12 19:19:38.888093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.430 [2024-07-12 19:19:38.888101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.430 [2024-07-12 19:19:38.888284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.430 [2024-07-12 19:19:38.888464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.430 [2024-07-12 19:19:38.888473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.430 [2024-07-12 19:19:38.888480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.430 [2024-07-12 19:19:38.891317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.430 [2024-07-12 19:19:38.900699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.430 [2024-07-12 19:19:38.901132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.430 [2024-07-12 19:19:38.901149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.430 [2024-07-12 19:19:38.901157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.430 [2024-07-12 19:19:38.901341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.430 [2024-07-12 19:19:38.901519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.430 [2024-07-12 19:19:38.901528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.430 [2024-07-12 19:19:38.901534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.430 [2024-07-12 19:19:38.904371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.430 [2024-07-12 19:19:38.913787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.430 [2024-07-12 19:19:38.914214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.430 [2024-07-12 19:19:38.914238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.430 [2024-07-12 19:19:38.914246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.430 [2024-07-12 19:19:38.914424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.430 [2024-07-12 19:19:38.914602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.430 [2024-07-12 19:19:38.914611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.430 [2024-07-12 19:19:38.914618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.430 [2024-07-12 19:19:38.917458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.430 [2024-07-12 19:19:38.926836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.430 [2024-07-12 19:19:38.927260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.430 [2024-07-12 19:19:38.927278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.430 [2024-07-12 19:19:38.927285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.430 [2024-07-12 19:19:38.927463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.430 [2024-07-12 19:19:38.927643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.430 [2024-07-12 19:19:38.927653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.430 [2024-07-12 19:19:38.927659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.430 [2024-07-12 19:19:38.930504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.430 [2024-07-12 19:19:38.939882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.430 [2024-07-12 19:19:38.940290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.430 [2024-07-12 19:19:38.940307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.430 [2024-07-12 19:19:38.940318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.430 [2024-07-12 19:19:38.940495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.430 [2024-07-12 19:19:38.940673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.430 [2024-07-12 19:19:38.940682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.430 [2024-07-12 19:19:38.940688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.430 [2024-07-12 19:19:38.943527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.430 [2024-07-12 19:19:38.953061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.430 [2024-07-12 19:19:38.953497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.430 [2024-07-12 19:19:38.953515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.430 [2024-07-12 19:19:38.953522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.430 [2024-07-12 19:19:38.953699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.430 [2024-07-12 19:19:38.953876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.430 [2024-07-12 19:19:38.953886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.430 [2024-07-12 19:19:38.953893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.430 [2024-07-12 19:19:38.956729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.430 [2024-07-12 19:19:38.966251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.430 [2024-07-12 19:19:38.966678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.430 [2024-07-12 19:19:38.966715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.430 [2024-07-12 19:19:38.966737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.430 [2024-07-12 19:19:38.967326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.430 [2024-07-12 19:19:38.967577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.430 [2024-07-12 19:19:38.967587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.430 [2024-07-12 19:19:38.967593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.430 [2024-07-12 19:19:38.970423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.430 [2024-07-12 19:19:38.979297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.430 [2024-07-12 19:19:38.979703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.430 [2024-07-12 19:19:38.979719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.430 [2024-07-12 19:19:38.979728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.430 [2024-07-12 19:19:38.979905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.430 [2024-07-12 19:19:38.980083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.430 [2024-07-12 19:19:38.980092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.430 [2024-07-12 19:19:38.980106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.430 [2024-07-12 19:19:38.982950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.430 [2024-07-12 19:19:38.992444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.430 [2024-07-12 19:19:38.992851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.430 [2024-07-12 19:19:38.992868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.430 [2024-07-12 19:19:38.992876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.430 [2024-07-12 19:19:38.993052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.430 [2024-07-12 19:19:38.993237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.430 [2024-07-12 19:19:38.993247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.430 [2024-07-12 19:19:38.993254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.690 [2024-07-12 19:19:38.996092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.690 [2024-07-12 19:19:39.005369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.690 [2024-07-12 19:19:39.005765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.690 [2024-07-12 19:19:39.005782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.690 [2024-07-12 19:19:39.005790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.690 [2024-07-12 19:19:39.005952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.690 [2024-07-12 19:19:39.006115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.690 [2024-07-12 19:19:39.006124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.690 [2024-07-12 19:19:39.006130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.690 [2024-07-12 19:19:39.008832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.690 [2024-07-12 19:19:39.018379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.690 [2024-07-12 19:19:39.018800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.690 [2024-07-12 19:19:39.018842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.690 [2024-07-12 19:19:39.018865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.690 [2024-07-12 19:19:39.019456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.690 [2024-07-12 19:19:39.019889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.690 [2024-07-12 19:19:39.019898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.690 [2024-07-12 19:19:39.019904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.690 [2024-07-12 19:19:39.025426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.690 [2024-07-12 19:19:39.033693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.690 [2024-07-12 19:19:39.034214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.690 [2024-07-12 19:19:39.034273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.690 [2024-07-12 19:19:39.034297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.690 [2024-07-12 19:19:39.034857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.690 [2024-07-12 19:19:39.035113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.690 [2024-07-12 19:19:39.035124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.690 [2024-07-12 19:19:39.035134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.690 [2024-07-12 19:19:39.039195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.690 [2024-07-12 19:19:39.046778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.690 [2024-07-12 19:19:39.047204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.690 [2024-07-12 19:19:39.047221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.690 [2024-07-12 19:19:39.047235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.690 [2024-07-12 19:19:39.047407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.690 [2024-07-12 19:19:39.047581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.690 [2024-07-12 19:19:39.047590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.690 [2024-07-12 19:19:39.047597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.690 [2024-07-12 19:19:39.050308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.691 [2024-07-12 19:19:39.059823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.691 [2024-07-12 19:19:39.060103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.691 [2024-07-12 19:19:39.060120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.691 [2024-07-12 19:19:39.060127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.691 [2024-07-12 19:19:39.060309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.691 [2024-07-12 19:19:39.060489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.691 [2024-07-12 19:19:39.060499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.691 [2024-07-12 19:19:39.060505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.691 [2024-07-12 19:19:39.063342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.691 [2024-07-12 19:19:39.072866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.691 [2024-07-12 19:19:39.073296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.691 [2024-07-12 19:19:39.073313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.691 [2024-07-12 19:19:39.073321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.691 [2024-07-12 19:19:39.073502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.691 [2024-07-12 19:19:39.073681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.691 [2024-07-12 19:19:39.073691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.691 [2024-07-12 19:19:39.073697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.691 [2024-07-12 19:19:39.076542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.691 [2024-07-12 19:19:39.085983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.691 [2024-07-12 19:19:39.086355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.691 [2024-07-12 19:19:39.086372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.691 [2024-07-12 19:19:39.086380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.691 [2024-07-12 19:19:39.086558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.691 [2024-07-12 19:19:39.086736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.691 [2024-07-12 19:19:39.086745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.691 [2024-07-12 19:19:39.086752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.691 [2024-07-12 19:19:39.089586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.691 [2024-07-12 19:19:39.099121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.691 [2024-07-12 19:19:39.099561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.691 [2024-07-12 19:19:39.099578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.691 [2024-07-12 19:19:39.099585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.691 [2024-07-12 19:19:39.099763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.691 [2024-07-12 19:19:39.099941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.691 [2024-07-12 19:19:39.099950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.691 [2024-07-12 19:19:39.099957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.691 [2024-07-12 19:19:39.102787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.691 [2024-07-12 19:19:39.112458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.691 [2024-07-12 19:19:39.112872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.691 [2024-07-12 19:19:39.112889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.691 [2024-07-12 19:19:39.112897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.691 [2024-07-12 19:19:39.113074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.691 [2024-07-12 19:19:39.113256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.691 [2024-07-12 19:19:39.113266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.691 [2024-07-12 19:19:39.113273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.691 [2024-07-12 19:19:39.116101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.691 [2024-07-12 19:19:39.125606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.691 [2024-07-12 19:19:39.125962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.691 [2024-07-12 19:19:39.125979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.691 [2024-07-12 19:19:39.125987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.691 [2024-07-12 19:19:39.126164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.691 [2024-07-12 19:19:39.126347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.691 [2024-07-12 19:19:39.126357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.691 [2024-07-12 19:19:39.126363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.691 [2024-07-12 19:19:39.129190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.691 [2024-07-12 19:19:39.138724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.691 [2024-07-12 19:19:39.139136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.691 [2024-07-12 19:19:39.139152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.691 [2024-07-12 19:19:39.139160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.691 [2024-07-12 19:19:39.139343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.691 [2024-07-12 19:19:39.139522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.691 [2024-07-12 19:19:39.139531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.691 [2024-07-12 19:19:39.139537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.691 [2024-07-12 19:19:39.142370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.691 [2024-07-12 19:19:39.151513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.691 [2024-07-12 19:19:39.151854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.691 [2024-07-12 19:19:39.151870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.691 [2024-07-12 19:19:39.151877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.691 [2024-07-12 19:19:39.152039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.691 [2024-07-12 19:19:39.152202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.691 [2024-07-12 19:19:39.152211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.691 [2024-07-12 19:19:39.152217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.691 [2024-07-12 19:19:39.154909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.691 [2024-07-12 19:19:39.164364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.691 [2024-07-12 19:19:39.164758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.691 [2024-07-12 19:19:39.164777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.691 [2024-07-12 19:19:39.164784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.691 [2024-07-12 19:19:39.164946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.691 [2024-07-12 19:19:39.165109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.691 [2024-07-12 19:19:39.165118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.691 [2024-07-12 19:19:39.165124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.691 [2024-07-12 19:19:39.167813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:36.691 [2024-07-12 19:19:39.177146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.691 [2024-07-12 19:19:39.177568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.691 [2024-07-12 19:19:39.177621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:36.691 [2024-07-12 19:19:39.177643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:36.691 [2024-07-12 19:19:39.178191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:36.691 [2024-07-12 19:19:39.178384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.691 [2024-07-12 19:19:39.178394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.691 [2024-07-12 19:19:39.178401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.691 [2024-07-12 19:19:39.181063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:36.691 [2024-07-12 19:19:39.190035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.691 [2024-07-12 19:19:39.190291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.691 [2024-07-12 19:19:39.190308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.691 [2024-07-12 19:19:39.190315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.691 [2024-07-12 19:19:39.190487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.691 [2024-07-12 19:19:39.190660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.691 [2024-07-12 19:19:39.190669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.691 [2024-07-12 19:19:39.190675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.691 [2024-07-12 19:19:39.193385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.692 [2024-07-12 19:19:39.202924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.692 [2024-07-12 19:19:39.203323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.692 [2024-07-12 19:19:39.203341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.692 [2024-07-12 19:19:39.203348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.692 [2024-07-12 19:19:39.203510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.692 [2024-07-12 19:19:39.203677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.692 [2024-07-12 19:19:39.203686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.692 [2024-07-12 19:19:39.203691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.692 [2024-07-12 19:19:39.206391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.692 [2024-07-12 19:19:39.215779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.692 [2024-07-12 19:19:39.216191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.692 [2024-07-12 19:19:39.216207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.692 [2024-07-12 19:19:39.216214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.692 [2024-07-12 19:19:39.216404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.692 [2024-07-12 19:19:39.216578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.692 [2024-07-12 19:19:39.216587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.692 [2024-07-12 19:19:39.216593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.692 [2024-07-12 19:19:39.219253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.692 [2024-07-12 19:19:39.228594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.692 [2024-07-12 19:19:39.228947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.692 [2024-07-12 19:19:39.228989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.692 [2024-07-12 19:19:39.229011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.692 [2024-07-12 19:19:39.229491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.692 [2024-07-12 19:19:39.229666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.692 [2024-07-12 19:19:39.229675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.692 [2024-07-12 19:19:39.229682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.692 [2024-07-12 19:19:39.232332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.692 [2024-07-12 19:19:39.241473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.692 [2024-07-12 19:19:39.241872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.692 [2024-07-12 19:19:39.241889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.692 [2024-07-12 19:19:39.241896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.692 [2024-07-12 19:19:39.242059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.692 [2024-07-12 19:19:39.242222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.692 [2024-07-12 19:19:39.242239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.692 [2024-07-12 19:19:39.242245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.692 [2024-07-12 19:19:39.244931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.692 [2024-07-12 19:19:39.254590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.692 [2024-07-12 19:19:39.254938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.692 [2024-07-12 19:19:39.254980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.692 [2024-07-12 19:19:39.255003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.692 [2024-07-12 19:19:39.255490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.692 [2024-07-12 19:19:39.255670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.692 [2024-07-12 19:19:39.255680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.692 [2024-07-12 19:19:39.255686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.952 [2024-07-12 19:19:39.258549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.952 [2024-07-12 19:19:39.267421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.952 [2024-07-12 19:19:39.267744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.952 [2024-07-12 19:19:39.267759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.952 [2024-07-12 19:19:39.267768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.952 [2024-07-12 19:19:39.267931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.952 [2024-07-12 19:19:39.268094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.952 [2024-07-12 19:19:39.268103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.952 [2024-07-12 19:19:39.268109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.952 [2024-07-12 19:19:39.270799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.952 [2024-07-12 19:19:39.280229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.952 [2024-07-12 19:19:39.280582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.952 [2024-07-12 19:19:39.280599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.952 [2024-07-12 19:19:39.280605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.953 [2024-07-12 19:19:39.280767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.953 [2024-07-12 19:19:39.280930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.953 [2024-07-12 19:19:39.280939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.953 [2024-07-12 19:19:39.280944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.953 [2024-07-12 19:19:39.283541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.953 [2024-07-12 19:19:39.293123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.953 [2024-07-12 19:19:39.293481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.953 [2024-07-12 19:19:39.293523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.953 [2024-07-12 19:19:39.293552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.953 [2024-07-12 19:19:39.294122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.953 [2024-07-12 19:19:39.294310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.953 [2024-07-12 19:19:39.294320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.953 [2024-07-12 19:19:39.294326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.953 [2024-07-12 19:19:39.297079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.953 [2024-07-12 19:19:39.305982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.953 [2024-07-12 19:19:39.306378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.953 [2024-07-12 19:19:39.306394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.953 [2024-07-12 19:19:39.306400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.953 [2024-07-12 19:19:39.306562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.953 [2024-07-12 19:19:39.306726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.953 [2024-07-12 19:19:39.306734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.953 [2024-07-12 19:19:39.306740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.953 [2024-07-12 19:19:39.309561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.953 [2024-07-12 19:19:39.318886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.953 [2024-07-12 19:19:39.319291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.953 [2024-07-12 19:19:39.319308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.953 [2024-07-12 19:19:39.319315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.953 [2024-07-12 19:19:39.319487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.953 [2024-07-12 19:19:39.319659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.953 [2024-07-12 19:19:39.319669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.953 [2024-07-12 19:19:39.319675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.953 [2024-07-12 19:19:39.322458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.953 [2024-07-12 19:19:39.331879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.953 [2024-07-12 19:19:39.332236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.953 [2024-07-12 19:19:39.332252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.953 [2024-07-12 19:19:39.332260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.953 [2024-07-12 19:19:39.332424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.953 [2024-07-12 19:19:39.332587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.953 [2024-07-12 19:19:39.332596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.953 [2024-07-12 19:19:39.332606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.953 [2024-07-12 19:19:39.335303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.953 [2024-07-12 19:19:39.344732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.953 [2024-07-12 19:19:39.345133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.953 [2024-07-12 19:19:39.345175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.953 [2024-07-12 19:19:39.345197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.953 [2024-07-12 19:19:39.345668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.953 [2024-07-12 19:19:39.345842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.953 [2024-07-12 19:19:39.345852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.953 [2024-07-12 19:19:39.345858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.953 [2024-07-12 19:19:39.348597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.953 [2024-07-12 19:19:39.357568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.953 [2024-07-12 19:19:39.357879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.953 [2024-07-12 19:19:39.357895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.953 [2024-07-12 19:19:39.357903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.953 [2024-07-12 19:19:39.358065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.953 [2024-07-12 19:19:39.358234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.953 [2024-07-12 19:19:39.358244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.953 [2024-07-12 19:19:39.358250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.953 [2024-07-12 19:19:39.360936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.953 [2024-07-12 19:19:39.370446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.953 [2024-07-12 19:19:39.370840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.953 [2024-07-12 19:19:39.370856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.953 [2024-07-12 19:19:39.370864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.953 [2024-07-12 19:19:39.371028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.953 [2024-07-12 19:19:39.371191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.953 [2024-07-12 19:19:39.371200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.953 [2024-07-12 19:19:39.371206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.953 [2024-07-12 19:19:39.373905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.953 [2024-07-12 19:19:39.383264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.953 [2024-07-12 19:19:39.383660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.953 [2024-07-12 19:19:39.383675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.953 [2024-07-12 19:19:39.383682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.953 [2024-07-12 19:19:39.383845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.953 [2024-07-12 19:19:39.384007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.953 [2024-07-12 19:19:39.384016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.953 [2024-07-12 19:19:39.384022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.953 [2024-07-12 19:19:39.386620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.953 [2024-07-12 19:19:39.396115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.953 [2024-07-12 19:19:39.396458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.953 [2024-07-12 19:19:39.396501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.953 [2024-07-12 19:19:39.396523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.953 [2024-07-12 19:19:39.397102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.953 [2024-07-12 19:19:39.397710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.953 [2024-07-12 19:19:39.397720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.953 [2024-07-12 19:19:39.397726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.953 [2024-07-12 19:19:39.400372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.953 [2024-07-12 19:19:39.408963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.953 [2024-07-12 19:19:39.409388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.953 [2024-07-12 19:19:39.409405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.953 [2024-07-12 19:19:39.409413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.953 [2024-07-12 19:19:39.409575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.953 [2024-07-12 19:19:39.409739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.953 [2024-07-12 19:19:39.409748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.953 [2024-07-12 19:19:39.409755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.953 [2024-07-12 19:19:39.412516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.953 [2024-07-12 19:19:39.422003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.953 [2024-07-12 19:19:39.422337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.953 [2024-07-12 19:19:39.422356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.953 [2024-07-12 19:19:39.422364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.954 [2024-07-12 19:19:39.422532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.954 [2024-07-12 19:19:39.422695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.954 [2024-07-12 19:19:39.422705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.954 [2024-07-12 19:19:39.422711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.954 [2024-07-12 19:19:39.425387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.954 [2024-07-12 19:19:39.434997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.954 [2024-07-12 19:19:39.435347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.954 [2024-07-12 19:19:39.435364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.954 [2024-07-12 19:19:39.435372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.954 [2024-07-12 19:19:39.435545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.954 [2024-07-12 19:19:39.435718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.954 [2024-07-12 19:19:39.435727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.954 [2024-07-12 19:19:39.435733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.954 [2024-07-12 19:19:39.438383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.954 [2024-07-12 19:19:39.447908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.954 [2024-07-12 19:19:39.448275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.954 [2024-07-12 19:19:39.448318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.954 [2024-07-12 19:19:39.448341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.954 [2024-07-12 19:19:39.448811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.954 [2024-07-12 19:19:39.448975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.954 [2024-07-12 19:19:39.448985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.954 [2024-07-12 19:19:39.448991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.954 [2024-07-12 19:19:39.451628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.954 [2024-07-12 19:19:39.460919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.954 [2024-07-12 19:19:39.461252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.954 [2024-07-12 19:19:39.461269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.954 [2024-07-12 19:19:39.461276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.954 [2024-07-12 19:19:39.461439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.954 [2024-07-12 19:19:39.461604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.954 [2024-07-12 19:19:39.461613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.954 [2024-07-12 19:19:39.461623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.954 [2024-07-12 19:19:39.464300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.954 [2024-07-12 19:19:39.473859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.954 [2024-07-12 19:19:39.474252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.954 [2024-07-12 19:19:39.474268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.954 [2024-07-12 19:19:39.474275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.954 [2024-07-12 19:19:39.474437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.954 [2024-07-12 19:19:39.474601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.954 [2024-07-12 19:19:39.474609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.954 [2024-07-12 19:19:39.474615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.954 [2024-07-12 19:19:39.477348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.954 [2024-07-12 19:19:39.486647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.954 [2024-07-12 19:19:39.486972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.954 [2024-07-12 19:19:39.486988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.954 [2024-07-12 19:19:39.486995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.954 [2024-07-12 19:19:39.487159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.954 [2024-07-12 19:19:39.487346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.954 [2024-07-12 19:19:39.487356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.954 [2024-07-12 19:19:39.487363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.954 [2024-07-12 19:19:39.490030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.954 [2024-07-12 19:19:39.499527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.954 [2024-07-12 19:19:39.499841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.954 [2024-07-12 19:19:39.499857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.954 [2024-07-12 19:19:39.499863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.954 [2024-07-12 19:19:39.500026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.954 [2024-07-12 19:19:39.500188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.954 [2024-07-12 19:19:39.500197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.954 [2024-07-12 19:19:39.500203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.954 [2024-07-12 19:19:39.502896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:36.954 [2024-07-12 19:19:39.512523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:36.954 [2024-07-12 19:19:39.512950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.954 [2024-07-12 19:19:39.512970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:36.954 [2024-07-12 19:19:39.512977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:36.954 [2024-07-12 19:19:39.513148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:36.954 [2024-07-12 19:19:39.513325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:36.954 [2024-07-12 19:19:39.513336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:36.954 [2024-07-12 19:19:39.513342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:36.954 [2024-07-12 19:19:39.516095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.214 [2024-07-12 19:19:39.525549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.214 [2024-07-12 19:19:39.525915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.214 [2024-07-12 19:19:39.525931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.214 [2024-07-12 19:19:39.525939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.214 [2024-07-12 19:19:39.526102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.215 [2024-07-12 19:19:39.526287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.215 [2024-07-12 19:19:39.526297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.215 [2024-07-12 19:19:39.526303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.215 [2024-07-12 19:19:39.528977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.215 [2024-07-12 19:19:39.538454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.215 [2024-07-12 19:19:39.538877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.215 [2024-07-12 19:19:39.538893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.215 [2024-07-12 19:19:39.538901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.215 [2024-07-12 19:19:39.539073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.215 [2024-07-12 19:19:39.539251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.215 [2024-07-12 19:19:39.539261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.215 [2024-07-12 19:19:39.539267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.215 [2024-07-12 19:19:39.541977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.215 [2024-07-12 19:19:39.551264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.215 [2024-07-12 19:19:39.551688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.215 [2024-07-12 19:19:39.551730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.215 [2024-07-12 19:19:39.551753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.215 [2024-07-12 19:19:39.552152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.215 [2024-07-12 19:19:39.552344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.215 [2024-07-12 19:19:39.552354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.215 [2024-07-12 19:19:39.552361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.215 [2024-07-12 19:19:39.555139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.215 [2024-07-12 19:19:39.564201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.215 [2024-07-12 19:19:39.564640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.215 [2024-07-12 19:19:39.564683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.215 [2024-07-12 19:19:39.564704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.215 [2024-07-12 19:19:39.565215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.215 [2024-07-12 19:19:39.565407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.215 [2024-07-12 19:19:39.565417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.215 [2024-07-12 19:19:39.565423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.215 [2024-07-12 19:19:39.568267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.215 [2024-07-12 19:19:39.577129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.215 [2024-07-12 19:19:39.577572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.215 [2024-07-12 19:19:39.577589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.215 [2024-07-12 19:19:39.577596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.215 [2024-07-12 19:19:39.577759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.215 [2024-07-12 19:19:39.577921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.215 [2024-07-12 19:19:39.577930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.215 [2024-07-12 19:19:39.577936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.215 [2024-07-12 19:19:39.580575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.215 [2024-07-12 19:19:39.589916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.215 [2024-07-12 19:19:39.590318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.215 [2024-07-12 19:19:39.590361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.215 [2024-07-12 19:19:39.590384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.215 [2024-07-12 19:19:39.590949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.215 [2024-07-12 19:19:39.591122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.215 [2024-07-12 19:19:39.591132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.215 [2024-07-12 19:19:39.591138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.215 [2024-07-12 19:19:39.593820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.215 [2024-07-12 19:19:39.602756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.215 [2024-07-12 19:19:39.603108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.215 [2024-07-12 19:19:39.603124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.215 [2024-07-12 19:19:39.603130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.215 [2024-07-12 19:19:39.603315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.215 [2024-07-12 19:19:39.603489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.215 [2024-07-12 19:19:39.603498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.215 [2024-07-12 19:19:39.603504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.215 [2024-07-12 19:19:39.606166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.215 [2024-07-12 19:19:39.615620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.215 [2024-07-12 19:19:39.616036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.215 [2024-07-12 19:19:39.616052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.215 [2024-07-12 19:19:39.616059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.215 [2024-07-12 19:19:39.616221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.215 [2024-07-12 19:19:39.616414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.215 [2024-07-12 19:19:39.616429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.215 [2024-07-12 19:19:39.616435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.215 [2024-07-12 19:19:39.619104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.215 [2024-07-12 19:19:39.628537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.215 [2024-07-12 19:19:39.628933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.215 [2024-07-12 19:19:39.628949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.215 [2024-07-12 19:19:39.628956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.215 [2024-07-12 19:19:39.629118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.215 [2024-07-12 19:19:39.629304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.215 [2024-07-12 19:19:39.629315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.215 [2024-07-12 19:19:39.629320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.215 [2024-07-12 19:19:39.631992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.215 [2024-07-12 19:19:39.641481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.215 [2024-07-12 19:19:39.641814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.215 [2024-07-12 19:19:39.641831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.215 [2024-07-12 19:19:39.641844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.215 [2024-07-12 19:19:39.642016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.215 [2024-07-12 19:19:39.642192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.215 [2024-07-12 19:19:39.642202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.215 [2024-07-12 19:19:39.642208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.215 [2024-07-12 19:19:39.644907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.215 [2024-07-12 19:19:39.654298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.215 [2024-07-12 19:19:39.654640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.215 [2024-07-12 19:19:39.654656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.215 [2024-07-12 19:19:39.654663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.215 [2024-07-12 19:19:39.654825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.215 [2024-07-12 19:19:39.654988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.215 [2024-07-12 19:19:39.654997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.215 [2024-07-12 19:19:39.655003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.215 [2024-07-12 19:19:39.657700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.215 [2024-07-12 19:19:39.667209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.215 [2024-07-12 19:19:39.667552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.215 [2024-07-12 19:19:39.667567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.215 [2024-07-12 19:19:39.667575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.215 [2024-07-12 19:19:39.667739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.216 [2024-07-12 19:19:39.667902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.216 [2024-07-12 19:19:39.667911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.216 [2024-07-12 19:19:39.667917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.216 [2024-07-12 19:19:39.670611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.216 [2024-07-12 19:19:39.680004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.216 [2024-07-12 19:19:39.680361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.216 [2024-07-12 19:19:39.680377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.216 [2024-07-12 19:19:39.680384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.216 [2024-07-12 19:19:39.680547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.216 [2024-07-12 19:19:39.680710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.216 [2024-07-12 19:19:39.680722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.216 [2024-07-12 19:19:39.680728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.216 [2024-07-12 19:19:39.683483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.216 [2024-07-12 19:19:39.692878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.216 [2024-07-12 19:19:39.693290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.216 [2024-07-12 19:19:39.693330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.216 [2024-07-12 19:19:39.693354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.216 [2024-07-12 19:19:39.693933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.216 [2024-07-12 19:19:39.694527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.216 [2024-07-12 19:19:39.694552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.216 [2024-07-12 19:19:39.694558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.216 [2024-07-12 19:19:39.697214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.216 [2024-07-12 19:19:39.705747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.216 [2024-07-12 19:19:39.706095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.216 [2024-07-12 19:19:39.706137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.216 [2024-07-12 19:19:39.706159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.216 [2024-07-12 19:19:39.706628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.216 [2024-07-12 19:19:39.706802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.216 [2024-07-12 19:19:39.706812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.216 [2024-07-12 19:19:39.706819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.216 [2024-07-12 19:19:39.709473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.216 [2024-07-12 19:19:39.718610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.216 [2024-07-12 19:19:39.719033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.216 [2024-07-12 19:19:39.719074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.216 [2024-07-12 19:19:39.719097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.216 [2024-07-12 19:19:39.719524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.216 [2024-07-12 19:19:39.719698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.216 [2024-07-12 19:19:39.719708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.216 [2024-07-12 19:19:39.719714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.216 [2024-07-12 19:19:39.722366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.216 [2024-07-12 19:19:39.731502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.216 [2024-07-12 19:19:39.731920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.216 [2024-07-12 19:19:39.731936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.216 [2024-07-12 19:19:39.731942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.216 [2024-07-12 19:19:39.732105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.216 [2024-07-12 19:19:39.732274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.216 [2024-07-12 19:19:39.732300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.216 [2024-07-12 19:19:39.732306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.216 [2024-07-12 19:19:39.734982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.216 [2024-07-12 19:19:39.744420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.216 [2024-07-12 19:19:39.744829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.216 [2024-07-12 19:19:39.744870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.216 [2024-07-12 19:19:39.744892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.216 [2024-07-12 19:19:39.745373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.216 [2024-07-12 19:19:39.745548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.216 [2024-07-12 19:19:39.745558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.216 [2024-07-12 19:19:39.745564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.216 [2024-07-12 19:19:39.748251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.216 [2024-07-12 19:19:39.757302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.216 [2024-07-12 19:19:39.757689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.216 [2024-07-12 19:19:39.757705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.216 [2024-07-12 19:19:39.757712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.216 [2024-07-12 19:19:39.757875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.216 [2024-07-12 19:19:39.758039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.216 [2024-07-12 19:19:39.758047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.216 [2024-07-12 19:19:39.758053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.216 [2024-07-12 19:19:39.760746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.216 [2024-07-12 19:19:39.770127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.216 [2024-07-12 19:19:39.770520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.216 [2024-07-12 19:19:39.770536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.216 [2024-07-12 19:19:39.770543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.216 [2024-07-12 19:19:39.770710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.216 [2024-07-12 19:19:39.770873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.216 [2024-07-12 19:19:39.770882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.216 [2024-07-12 19:19:39.770888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.216 [2024-07-12 19:19:39.773581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.476 [2024-07-12 19:19:39.783196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.476 [2024-07-12 19:19:39.783550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.476 [2024-07-12 19:19:39.783566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.476 [2024-07-12 19:19:39.783574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.476 [2024-07-12 19:19:39.783736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.476 [2024-07-12 19:19:39.783901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.476 [2024-07-12 19:19:39.783909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.476 [2024-07-12 19:19:39.783915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.476 [2024-07-12 19:19:39.786665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.476 [2024-07-12 19:19:39.796188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.476 [2024-07-12 19:19:39.796594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.476 [2024-07-12 19:19:39.796611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.476 [2024-07-12 19:19:39.796619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.476 [2024-07-12 19:19:39.796782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.476 [2024-07-12 19:19:39.796945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.476 [2024-07-12 19:19:39.796955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.476 [2024-07-12 19:19:39.796963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.476 [2024-07-12 19:19:39.799560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.476 [2024-07-12 19:19:39.809212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.476 [2024-07-12 19:19:39.809764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.476 [2024-07-12 19:19:39.809782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.476 [2024-07-12 19:19:39.809789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.476 [2024-07-12 19:19:39.809967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.476 [2024-07-12 19:19:39.810145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.476 [2024-07-12 19:19:39.810155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.476 [2024-07-12 19:19:39.810166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.476 [2024-07-12 19:19:39.813058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.476 [2024-07-12 19:19:39.822427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.476 [2024-07-12 19:19:39.822855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.476 [2024-07-12 19:19:39.822898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.476 [2024-07-12 19:19:39.822921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.476 [2024-07-12 19:19:39.823515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.476 [2024-07-12 19:19:39.823940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.476 [2024-07-12 19:19:39.823950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.476 [2024-07-12 19:19:39.823956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.476 [2024-07-12 19:19:39.826807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.476 [2024-07-12 19:19:39.835431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.476 [2024-07-12 19:19:39.835825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.476 [2024-07-12 19:19:39.835842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.476 [2024-07-12 19:19:39.835850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.476 [2024-07-12 19:19:39.836023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.476 [2024-07-12 19:19:39.836197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.476 [2024-07-12 19:19:39.836206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.476 [2024-07-12 19:19:39.836213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.476 [2024-07-12 19:19:39.838888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.476 [2024-07-12 19:19:39.848355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.476 [2024-07-12 19:19:39.848775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.476 [2024-07-12 19:19:39.848792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.476 [2024-07-12 19:19:39.848800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.477 [2024-07-12 19:19:39.848973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.477 [2024-07-12 19:19:39.849146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.477 [2024-07-12 19:19:39.849156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.477 [2024-07-12 19:19:39.849162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.477 [2024-07-12 19:19:39.851920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.477 [2024-07-12 19:19:39.861202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.477 [2024-07-12 19:19:39.861551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.477 [2024-07-12 19:19:39.861570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.477 [2024-07-12 19:19:39.861578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.477 [2024-07-12 19:19:39.861741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.477 [2024-07-12 19:19:39.861904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.477 [2024-07-12 19:19:39.861914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.477 [2024-07-12 19:19:39.861920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.477 [2024-07-12 19:19:39.864613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.477 [2024-07-12 19:19:39.874055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.477 [2024-07-12 19:19:39.874473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.477 [2024-07-12 19:19:39.874520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.477 [2024-07-12 19:19:39.874543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.477 [2024-07-12 19:19:39.875048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.477 [2024-07-12 19:19:39.875213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.477 [2024-07-12 19:19:39.875222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.477 [2024-07-12 19:19:39.875233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.477 [2024-07-12 19:19:39.877922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.477 [2024-07-12 19:19:39.886961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.477 [2024-07-12 19:19:39.887274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.477 [2024-07-12 19:19:39.887291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.477 [2024-07-12 19:19:39.887298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.477 [2024-07-12 19:19:39.887462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.477 [2024-07-12 19:19:39.887625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.477 [2024-07-12 19:19:39.887634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.477 [2024-07-12 19:19:39.887640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.477 [2024-07-12 19:19:39.890332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.477 [2024-07-12 19:19:39.899887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.477 [2024-07-12 19:19:39.900280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.477 [2024-07-12 19:19:39.900296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.477 [2024-07-12 19:19:39.900304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.477 [2024-07-12 19:19:39.900465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.477 [2024-07-12 19:19:39.900631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.477 [2024-07-12 19:19:39.900640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.477 [2024-07-12 19:19:39.900646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.477 [2024-07-12 19:19:39.903379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.477 [2024-07-12 19:19:39.912800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.477 [2024-07-12 19:19:39.913199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.477 [2024-07-12 19:19:39.913215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.477 [2024-07-12 19:19:39.913222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.477 [2024-07-12 19:19:39.913414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.477 [2024-07-12 19:19:39.913587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.477 [2024-07-12 19:19:39.913596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.477 [2024-07-12 19:19:39.913603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.477 [2024-07-12 19:19:39.916307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.477 [2024-07-12 19:19:39.925642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.477 [2024-07-12 19:19:39.926061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.477 [2024-07-12 19:19:39.926102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.477 [2024-07-12 19:19:39.926124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.477 [2024-07-12 19:19:39.926721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.477 [2024-07-12 19:19:39.926910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.477 [2024-07-12 19:19:39.926921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.477 [2024-07-12 19:19:39.926928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.477 [2024-07-12 19:19:39.929566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.477 [2024-07-12 19:19:39.938559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.477 [2024-07-12 19:19:39.938982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.477 [2024-07-12 19:19:39.939024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.477 [2024-07-12 19:19:39.939047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.477 [2024-07-12 19:19:39.939641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.477 [2024-07-12 19:19:39.940175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.477 [2024-07-12 19:19:39.940192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.477 [2024-07-12 19:19:39.940206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.477 [2024-07-12 19:19:39.946455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.477 [2024-07-12 19:19:39.953446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.477 [2024-07-12 19:19:39.953948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.477 [2024-07-12 19:19:39.953969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.477 [2024-07-12 19:19:39.953980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.477 [2024-07-12 19:19:39.954242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.477 [2024-07-12 19:19:39.954500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.477 [2024-07-12 19:19:39.954512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.477 [2024-07-12 19:19:39.954522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.477 [2024-07-12 19:19:39.958590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.477 [2024-07-12 19:19:39.966624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.477 [2024-07-12 19:19:39.967054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.477 [2024-07-12 19:19:39.967071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.477 [2024-07-12 19:19:39.967078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.477 [2024-07-12 19:19:39.967261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.477 [2024-07-12 19:19:39.967439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.477 [2024-07-12 19:19:39.967449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.477 [2024-07-12 19:19:39.967455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.477 [2024-07-12 19:19:39.970290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.477 [2024-07-12 19:19:39.979822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.477 [2024-07-12 19:19:39.980237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.477 [2024-07-12 19:19:39.980254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.477 [2024-07-12 19:19:39.980262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.477 [2024-07-12 19:19:39.980440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.477 [2024-07-12 19:19:39.980619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.477 [2024-07-12 19:19:39.980629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.477 [2024-07-12 19:19:39.980635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.477 [2024-07-12 19:19:39.983469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.477 [2024-07-12 19:19:39.993008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.477 [2024-07-12 19:19:39.993434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.477 [2024-07-12 19:19:39.993452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.477 [2024-07-12 19:19:39.993463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.477 [2024-07-12 19:19:39.993641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.477 [2024-07-12 19:19:39.993820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.477 [2024-07-12 19:19:39.993830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.477 [2024-07-12 19:19:39.993837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.477 [2024-07-12 19:19:39.996673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.477 [2024-07-12 19:19:40.006077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.477 [2024-07-12 19:19:40.006514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.478 [2024-07-12 19:19:40.006532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.478 [2024-07-12 19:19:40.006539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.478 [2024-07-12 19:19:40.006717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.478 [2024-07-12 19:19:40.006895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.478 [2024-07-12 19:19:40.006904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.478 [2024-07-12 19:19:40.006911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.478 [2024-07-12 19:19:40.009762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.478 [2024-07-12 19:19:40.019159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.478 [2024-07-12 19:19:40.019576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.478 [2024-07-12 19:19:40.019594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.478 [2024-07-12 19:19:40.019602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.478 [2024-07-12 19:19:40.019780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.478 [2024-07-12 19:19:40.019959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.478 [2024-07-12 19:19:40.019970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.478 [2024-07-12 19:19:40.019976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.478 [2024-07-12 19:19:40.022828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.478 [2024-07-12 19:19:40.033026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.478 [2024-07-12 19:19:40.033397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.478 [2024-07-12 19:19:40.033416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.478 [2024-07-12 19:19:40.033425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.478 [2024-07-12 19:19:40.033604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.478 [2024-07-12 19:19:40.033783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.478 [2024-07-12 19:19:40.033797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.478 [2024-07-12 19:19:40.033804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.478 [2024-07-12 19:19:40.036649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.739 [2024-07-12 19:19:40.046200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.739 [2024-07-12 19:19:40.046618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.739 [2024-07-12 19:19:40.046637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.739 [2024-07-12 19:19:40.046645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.739 [2024-07-12 19:19:40.046823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.739 [2024-07-12 19:19:40.047000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.739 [2024-07-12 19:19:40.047010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.739 [2024-07-12 19:19:40.047016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.739 [2024-07-12 19:19:40.049853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.739 [2024-07-12 19:19:40.059266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.739 [2024-07-12 19:19:40.059656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.739 [2024-07-12 19:19:40.059673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.739 [2024-07-12 19:19:40.059681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.739 [2024-07-12 19:19:40.059859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.739 [2024-07-12 19:19:40.060037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.739 [2024-07-12 19:19:40.060047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.739 [2024-07-12 19:19:40.060053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.739 [2024-07-12 19:19:40.062897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.739 [2024-07-12 19:19:40.072336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.739 [2024-07-12 19:19:40.072700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.739 [2024-07-12 19:19:40.072717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.739 [2024-07-12 19:19:40.072725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.739 [2024-07-12 19:19:40.072896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.739 [2024-07-12 19:19:40.073070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.739 [2024-07-12 19:19:40.073079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.739 [2024-07-12 19:19:40.073086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.739 [2024-07-12 19:19:40.075937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.739 [2024-07-12 19:19:40.085479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.739 [2024-07-12 19:19:40.085755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.739 [2024-07-12 19:19:40.085772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.739 [2024-07-12 19:19:40.085780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.739 [2024-07-12 19:19:40.085952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.739 [2024-07-12 19:19:40.086125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.739 [2024-07-12 19:19:40.086135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.739 [2024-07-12 19:19:40.086141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.739 [2024-07-12 19:19:40.088969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.739 [2024-07-12 19:19:40.098442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.739 [2024-07-12 19:19:40.098856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.739 [2024-07-12 19:19:40.098873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.739 [2024-07-12 19:19:40.098880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.739 [2024-07-12 19:19:40.099044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.739 [2024-07-12 19:19:40.099208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.739 [2024-07-12 19:19:40.099217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.739 [2024-07-12 19:19:40.099223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.739 [2024-07-12 19:19:40.101880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.739 [2024-07-12 19:19:40.111486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.739 [2024-07-12 19:19:40.111884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.739 [2024-07-12 19:19:40.111901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.739 [2024-07-12 19:19:40.111908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.739 [2024-07-12 19:19:40.112070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.739 [2024-07-12 19:19:40.112240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.739 [2024-07-12 19:19:40.112250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.739 [2024-07-12 19:19:40.112256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.739 [2024-07-12 19:19:40.115008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.740 [2024-07-12 19:19:40.124392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.740 [2024-07-12 19:19:40.124807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.740 [2024-07-12 19:19:40.124823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.740 [2024-07-12 19:19:40.124831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.740 [2024-07-12 19:19:40.124998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.740 [2024-07-12 19:19:40.125161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.740 [2024-07-12 19:19:40.125170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.740 [2024-07-12 19:19:40.125176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.740 [2024-07-12 19:19:40.127875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.740 [2024-07-12 19:19:40.137293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.740 [2024-07-12 19:19:40.137633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.740 [2024-07-12 19:19:40.137650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.740 [2024-07-12 19:19:40.137657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.740 [2024-07-12 19:19:40.137820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.740 [2024-07-12 19:19:40.137983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.740 [2024-07-12 19:19:40.137992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.740 [2024-07-12 19:19:40.137998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.740 [2024-07-12 19:19:40.140749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.740 [2024-07-12 19:19:40.150328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.740 [2024-07-12 19:19:40.150619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.740 [2024-07-12 19:19:40.150657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.740 [2024-07-12 19:19:40.150681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.740 [2024-07-12 19:19:40.151201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.740 [2024-07-12 19:19:40.151372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.740 [2024-07-12 19:19:40.151381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.740 [2024-07-12 19:19:40.151387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.740 [2024-07-12 19:19:40.154049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.740 [2024-07-12 19:19:40.163343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.740 [2024-07-12 19:19:40.163756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.740 [2024-07-12 19:19:40.163794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.740 [2024-07-12 19:19:40.163817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.740 [2024-07-12 19:19:40.164358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.740 [2024-07-12 19:19:40.164523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.740 [2024-07-12 19:19:40.164532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.740 [2024-07-12 19:19:40.164542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.740 [2024-07-12 19:19:40.167243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.740 [2024-07-12 19:19:40.176270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.740 [2024-07-12 19:19:40.176547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.740 [2024-07-12 19:19:40.176563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.740 [2024-07-12 19:19:40.176570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.740 [2024-07-12 19:19:40.176733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.740 [2024-07-12 19:19:40.176897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.740 [2024-07-12 19:19:40.176906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.740 [2024-07-12 19:19:40.176913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.740 [2024-07-12 19:19:40.179614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.740 [2024-07-12 19:19:40.189160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.740 [2024-07-12 19:19:40.189458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.740 [2024-07-12 19:19:40.189475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.740 [2024-07-12 19:19:40.189483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.740 [2024-07-12 19:19:40.189655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.740 [2024-07-12 19:19:40.189828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.740 [2024-07-12 19:19:40.189837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.740 [2024-07-12 19:19:40.189843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.740 [2024-07-12 19:19:40.192623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.740 [2024-07-12 19:19:40.202109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.740 [2024-07-12 19:19:40.202539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.740 [2024-07-12 19:19:40.202581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.740 [2024-07-12 19:19:40.202606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.740 [2024-07-12 19:19:40.203190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.740 [2024-07-12 19:19:40.203713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.740 [2024-07-12 19:19:40.203725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.740 [2024-07-12 19:19:40.203732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.740 [2024-07-12 19:19:40.206408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.740 [2024-07-12 19:19:40.215100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.740 [2024-07-12 19:19:40.215500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.740 [2024-07-12 19:19:40.215519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.740 [2024-07-12 19:19:40.215526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.740 [2024-07-12 19:19:40.215689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.740 [2024-07-12 19:19:40.215852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.740 [2024-07-12 19:19:40.215861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.740 [2024-07-12 19:19:40.215867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.740 [2024-07-12 19:19:40.218623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.740 [2024-07-12 19:19:40.228062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.740 [2024-07-12 19:19:40.228445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.740 [2024-07-12 19:19:40.228487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.740 [2024-07-12 19:19:40.228509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.740 [2024-07-12 19:19:40.229087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.740 [2024-07-12 19:19:40.229673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.740 [2024-07-12 19:19:40.229683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.740 [2024-07-12 19:19:40.229690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.740 [2024-07-12 19:19:40.232408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.740 [2024-07-12 19:19:40.240912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.740 [2024-07-12 19:19:40.241262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.740 [2024-07-12 19:19:40.241279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.740 [2024-07-12 19:19:40.241286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.740 [2024-07-12 19:19:40.241458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.740 [2024-07-12 19:19:40.241634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.740 [2024-07-12 19:19:40.241643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.740 [2024-07-12 19:19:40.241649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.740 [2024-07-12 19:19:40.244372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.740 [2024-07-12 19:19:40.253862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.740 [2024-07-12 19:19:40.254281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.740 [2024-07-12 19:19:40.254298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.740 [2024-07-12 19:19:40.254305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.740 [2024-07-12 19:19:40.254468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.740 [2024-07-12 19:19:40.254635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.740 [2024-07-12 19:19:40.254645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.740 [2024-07-12 19:19:40.254651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.741 [2024-07-12 19:19:40.257335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.741 [2024-07-12 19:19:40.266951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.741 [2024-07-12 19:19:40.267361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.741 [2024-07-12 19:19:40.267377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.741 [2024-07-12 19:19:40.267385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.741 [2024-07-12 19:19:40.267547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.741 [2024-07-12 19:19:40.267710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.741 [2024-07-12 19:19:40.267720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.741 [2024-07-12 19:19:40.267725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.741 [2024-07-12 19:19:40.270413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.741 [2024-07-12 19:19:40.279899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.741 [2024-07-12 19:19:40.280308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.741 [2024-07-12 19:19:40.280325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.741 [2024-07-12 19:19:40.280332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.741 [2024-07-12 19:19:40.280494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.741 [2024-07-12 19:19:40.280658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.741 [2024-07-12 19:19:40.280667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.741 [2024-07-12 19:19:40.280673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.741 [2024-07-12 19:19:40.283350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.741 [2024-07-12 19:19:40.292841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.741 [2024-07-12 19:19:40.293241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.741 [2024-07-12 19:19:40.293258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:37.741 [2024-07-12 19:19:40.293266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:37.741 [2024-07-12 19:19:40.293438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:37.741 [2024-07-12 19:19:40.293611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.741 [2024-07-12 19:19:40.293620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.741 [2024-07-12 19:19:40.293626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.741 [2024-07-12 19:19:40.296333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.002 [2024-07-12 19:19:40.305960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.002 [2024-07-12 19:19:40.306377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.002 [2024-07-12 19:19:40.306395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.002 [2024-07-12 19:19:40.306403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.002 [2024-07-12 19:19:40.306586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.002 [2024-07-12 19:19:40.306760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.002 [2024-07-12 19:19:40.306769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.002 [2024-07-12 19:19:40.306776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.002 [2024-07-12 19:19:40.309475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.002 [2024-07-12 19:19:40.318989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.002 [2024-07-12 19:19:40.319350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.002 [2024-07-12 19:19:40.319368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.002 [2024-07-12 19:19:40.319376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.002 [2024-07-12 19:19:40.319553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.002 [2024-07-12 19:19:40.319716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.002 [2024-07-12 19:19:40.319726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.002 [2024-07-12 19:19:40.319732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.002 [2024-07-12 19:19:40.322364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.002 [2024-07-12 19:19:40.331921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.002 [2024-07-12 19:19:40.332213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.002 [2024-07-12 19:19:40.332237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.002 [2024-07-12 19:19:40.332245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.002 [2024-07-12 19:19:40.332417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.002 [2024-07-12 19:19:40.332590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.002 [2024-07-12 19:19:40.332599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.002 [2024-07-12 19:19:40.332606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.002 [2024-07-12 19:19:40.335429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.002 [2024-07-12 19:19:40.344923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.002 [2024-07-12 19:19:40.345222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.002 [2024-07-12 19:19:40.345246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.002 [2024-07-12 19:19:40.345258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.002 [2024-07-12 19:19:40.345430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.002 [2024-07-12 19:19:40.345608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.002 [2024-07-12 19:19:40.345618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.002 [2024-07-12 19:19:40.345623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.002 [2024-07-12 19:19:40.348231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.002 [2024-07-12 19:19:40.357906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.002 [2024-07-12 19:19:40.358347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.002 [2024-07-12 19:19:40.358391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.002 [2024-07-12 19:19:40.358413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.002 [2024-07-12 19:19:40.358993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.002 [2024-07-12 19:19:40.359200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.002 [2024-07-12 19:19:40.359210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.002 [2024-07-12 19:19:40.359216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.002 [2024-07-12 19:19:40.361865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.002 [2024-07-12 19:19:40.371027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.002 [2024-07-12 19:19:40.371460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.002 [2024-07-12 19:19:40.371477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.002 [2024-07-12 19:19:40.371484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.002 [2024-07-12 19:19:40.371656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.002 [2024-07-12 19:19:40.371847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.002 [2024-07-12 19:19:40.371856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.002 [2024-07-12 19:19:40.371863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.002 [2024-07-12 19:19:40.374669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.002 [2024-07-12 19:19:40.383976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.002 [2024-07-12 19:19:40.384379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.002 [2024-07-12 19:19:40.384422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.002 [2024-07-12 19:19:40.384445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.002 [2024-07-12 19:19:40.384841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.002 [2024-07-12 19:19:40.385006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.002 [2024-07-12 19:19:40.385019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.002 [2024-07-12 19:19:40.385025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.003 [2024-07-12 19:19:40.390585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.003 [2024-07-12 19:19:40.399141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.003 [2024-07-12 19:19:40.399609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.003 [2024-07-12 19:19:40.399631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.003 [2024-07-12 19:19:40.399641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.003 [2024-07-12 19:19:40.399895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.003 [2024-07-12 19:19:40.400149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.003 [2024-07-12 19:19:40.400162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.003 [2024-07-12 19:19:40.400171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.003 [2024-07-12 19:19:40.404240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.003 [2024-07-12 19:19:40.412095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.003 [2024-07-12 19:19:40.412506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.003 [2024-07-12 19:19:40.412522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.003 [2024-07-12 19:19:40.412529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.003 [2024-07-12 19:19:40.412695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.003 [2024-07-12 19:19:40.412863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.003 [2024-07-12 19:19:40.412888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.003 [2024-07-12 19:19:40.412895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.003 [2024-07-12 19:19:40.415842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.003 [2024-07-12 19:19:40.425042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.003 [2024-07-12 19:19:40.425373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.003 [2024-07-12 19:19:40.425390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.003 [2024-07-12 19:19:40.425397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.003 [2024-07-12 19:19:40.425561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.003 [2024-07-12 19:19:40.425725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.003 [2024-07-12 19:19:40.425734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.003 [2024-07-12 19:19:40.425740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.003 [2024-07-12 19:19:40.428433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.003 [2024-07-12 19:19:40.437971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.003 [2024-07-12 19:19:40.438383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.003 [2024-07-12 19:19:40.438431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.003 [2024-07-12 19:19:40.438454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.003 [2024-07-12 19:19:40.439032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.003 [2024-07-12 19:19:40.439255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.003 [2024-07-12 19:19:40.439264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.003 [2024-07-12 19:19:40.439270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.003 [2024-07-12 19:19:40.442019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.003 [2024-07-12 19:19:40.451055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.003 [2024-07-12 19:19:40.451481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.003 [2024-07-12 19:19:40.451498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.003 [2024-07-12 19:19:40.451505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.003 [2024-07-12 19:19:40.451669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.003 [2024-07-12 19:19:40.451832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.003 [2024-07-12 19:19:40.451842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.003 [2024-07-12 19:19:40.451848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.003 [2024-07-12 19:19:40.454555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.003 [2024-07-12 19:19:40.463956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.003 [2024-07-12 19:19:40.464374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.003 [2024-07-12 19:19:40.464391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.003 [2024-07-12 19:19:40.464398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.003 [2024-07-12 19:19:40.464562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.003 [2024-07-12 19:19:40.464726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.003 [2024-07-12 19:19:40.464735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.003 [2024-07-12 19:19:40.464741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.003 [2024-07-12 19:19:40.467489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.003 [2024-07-12 19:19:40.476844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.003 [2024-07-12 19:19:40.477236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.003 [2024-07-12 19:19:40.477252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.003 [2024-07-12 19:19:40.477259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.003 [2024-07-12 19:19:40.477427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.003 [2024-07-12 19:19:40.477590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.003 [2024-07-12 19:19:40.477599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.003 [2024-07-12 19:19:40.477605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.003 [2024-07-12 19:19:40.480298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.003 [2024-07-12 19:19:40.489735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.003 [2024-07-12 19:19:40.490150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.003 [2024-07-12 19:19:40.490167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.003 [2024-07-12 19:19:40.490175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.003 [2024-07-12 19:19:40.490365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.003 [2024-07-12 19:19:40.490539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.003 [2024-07-12 19:19:40.490549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.003 [2024-07-12 19:19:40.490555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.003 [2024-07-12 19:19:40.493268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.003 [2024-07-12 19:19:40.502670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.003 [2024-07-12 19:19:40.503020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.003 [2024-07-12 19:19:40.503036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.003 [2024-07-12 19:19:40.503043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.003 [2024-07-12 19:19:40.503207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.003 [2024-07-12 19:19:40.503377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.003 [2024-07-12 19:19:40.503386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.003 [2024-07-12 19:19:40.503392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.003 [2024-07-12 19:19:40.506029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.003 [2024-07-12 19:19:40.515587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.003 [2024-07-12 19:19:40.516005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.003 [2024-07-12 19:19:40.516021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.003 [2024-07-12 19:19:40.516028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.003 [2024-07-12 19:19:40.516192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.003 [2024-07-12 19:19:40.516385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.003 [2024-07-12 19:19:40.516395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.003 [2024-07-12 19:19:40.516405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.003 [2024-07-12 19:19:40.519092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.003 [2024-07-12 19:19:40.528567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.003 [2024-07-12 19:19:40.528988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.003 [2024-07-12 19:19:40.529006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.003 [2024-07-12 19:19:40.529013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.003 [2024-07-12 19:19:40.529177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.003 [2024-07-12 19:19:40.529345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.004 [2024-07-12 19:19:40.529356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.004 [2024-07-12 19:19:40.529362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.004 [2024-07-12 19:19:40.532060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.004 [2024-07-12 19:19:40.541508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.004 [2024-07-12 19:19:40.541907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.004 [2024-07-12 19:19:40.541923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.004 [2024-07-12 19:19:40.541930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.004 [2024-07-12 19:19:40.542093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.004 [2024-07-12 19:19:40.542262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.004 [2024-07-12 19:19:40.542271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.004 [2024-07-12 19:19:40.542278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.004 [2024-07-12 19:19:40.544974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.004 [2024-07-12 19:19:40.554480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.004 [2024-07-12 19:19:40.554896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.004 [2024-07-12 19:19:40.554912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.004 [2024-07-12 19:19:40.554920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.004 [2024-07-12 19:19:40.555083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.004 [2024-07-12 19:19:40.555269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.004 [2024-07-12 19:19:40.555279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.004 [2024-07-12 19:19:40.555286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.004 [2024-07-12 19:19:40.557962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.004 [2024-07-12 19:19:40.567589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.004 [2024-07-12 19:19:40.568014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.004 [2024-07-12 19:19:40.568034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.004 [2024-07-12 19:19:40.568042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.264 [2024-07-12 19:19:40.568215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.264 [2024-07-12 19:19:40.568396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.264 [2024-07-12 19:19:40.568406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.264 [2024-07-12 19:19:40.568412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.264 [2024-07-12 19:19:40.571142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.264 [2024-07-12 19:19:40.580522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.264 [2024-07-12 19:19:40.580926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.264 [2024-07-12 19:19:40.580942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.264 [2024-07-12 19:19:40.580949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.264 [2024-07-12 19:19:40.581112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.264 [2024-07-12 19:19:40.581301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.264 [2024-07-12 19:19:40.581311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.264 [2024-07-12 19:19:40.581317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.264 [2024-07-12 19:19:40.583993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.264 [2024-07-12 19:19:40.593651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.264 [2024-07-12 19:19:40.594059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.264 [2024-07-12 19:19:40.594076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.264 [2024-07-12 19:19:40.594084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.264 [2024-07-12 19:19:40.594266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.264 [2024-07-12 19:19:40.594453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.264 [2024-07-12 19:19:40.594462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.264 [2024-07-12 19:19:40.594469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.264 [2024-07-12 19:19:40.597208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.264 [2024-07-12 19:19:40.606617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.264 [2024-07-12 19:19:40.607016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.264 [2024-07-12 19:19:40.607032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.264 [2024-07-12 19:19:40.607039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.264 [2024-07-12 19:19:40.607202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.264 [2024-07-12 19:19:40.607403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.264 [2024-07-12 19:19:40.607413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.264 [2024-07-12 19:19:40.607419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.264 [2024-07-12 19:19:40.610090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.264 [2024-07-12 19:19:40.619625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.264 [2024-07-12 19:19:40.620036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.264 [2024-07-12 19:19:40.620070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.264 [2024-07-12 19:19:40.620094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.264 [2024-07-12 19:19:40.620664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.264 [2024-07-12 19:19:40.620829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.264 [2024-07-12 19:19:40.620838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.264 [2024-07-12 19:19:40.620845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.264 [2024-07-12 19:19:40.623538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.264 [2024-07-12 19:19:40.632446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.264 [2024-07-12 19:19:40.632857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.264 [2024-07-12 19:19:40.632898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.264 [2024-07-12 19:19:40.632921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.264 [2024-07-12 19:19:40.633469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.264 [2024-07-12 19:19:40.633635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.264 [2024-07-12 19:19:40.633644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.264 [2024-07-12 19:19:40.633651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.264 [2024-07-12 19:19:40.636251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.264 [2024-07-12 19:19:40.645355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.264 [2024-07-12 19:19:40.645779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.264 [2024-07-12 19:19:40.645820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.264 [2024-07-12 19:19:40.645844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.264 [2024-07-12 19:19:40.646439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.264 [2024-07-12 19:19:40.646969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.264 [2024-07-12 19:19:40.646978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.264 [2024-07-12 19:19:40.646984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.264 [2024-07-12 19:19:40.649691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.264 [2024-07-12 19:19:40.658255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.264 [2024-07-12 19:19:40.658663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.264 [2024-07-12 19:19:40.658679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.264 [2024-07-12 19:19:40.658687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.264 [2024-07-12 19:19:40.658848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.264 [2024-07-12 19:19:40.659012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.264 [2024-07-12 19:19:40.659021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.264 [2024-07-12 19:19:40.659026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.264 [2024-07-12 19:19:40.661628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.264 [2024-07-12 19:19:40.671135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.264 [2024-07-12 19:19:40.671558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.264 [2024-07-12 19:19:40.671600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.264 [2024-07-12 19:19:40.671622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.265 [2024-07-12 19:19:40.672036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.265 [2024-07-12 19:19:40.672201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.265 [2024-07-12 19:19:40.672210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.265 [2024-07-12 19:19:40.672216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.265 [2024-07-12 19:19:40.674908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.265 [2024-07-12 19:19:40.684096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.265 [2024-07-12 19:19:40.684525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.265 [2024-07-12 19:19:40.684541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.265 [2024-07-12 19:19:40.684549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.265 [2024-07-12 19:19:40.684720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.265 [2024-07-12 19:19:40.684898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.265 [2024-07-12 19:19:40.684909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.265 [2024-07-12 19:19:40.684915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.265 [2024-07-12 19:19:40.687622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.265 [2024-07-12 19:19:40.696963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.265 [2024-07-12 19:19:40.697383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.265 [2024-07-12 19:19:40.697427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.265 [2024-07-12 19:19:40.697457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.265 [2024-07-12 19:19:40.697988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.265 [2024-07-12 19:19:40.698383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.265 [2024-07-12 19:19:40.698402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.265 [2024-07-12 19:19:40.698416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.265 [2024-07-12 19:19:40.704655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.265 [2024-07-12 19:19:40.711778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.265 [2024-07-12 19:19:40.712288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.265 [2024-07-12 19:19:40.712310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.265 [2024-07-12 19:19:40.712320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.265 [2024-07-12 19:19:40.712573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.265 [2024-07-12 19:19:40.712827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.265 [2024-07-12 19:19:40.712839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.265 [2024-07-12 19:19:40.712847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.265 [2024-07-12 19:19:40.716908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.265 [2024-07-12 19:19:40.724660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.265 [2024-07-12 19:19:40.725054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.265 [2024-07-12 19:19:40.725071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.265 [2024-07-12 19:19:40.725079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.265 [2024-07-12 19:19:40.725252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.265 [2024-07-12 19:19:40.725440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.265 [2024-07-12 19:19:40.725450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.265 [2024-07-12 19:19:40.725457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.265 [2024-07-12 19:19:40.728167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.265 [2024-07-12 19:19:40.737582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.265 [2024-07-12 19:19:40.737977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.265 [2024-07-12 19:19:40.738015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.265 [2024-07-12 19:19:40.738038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.265 [2024-07-12 19:19:40.738631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.265 [2024-07-12 19:19:40.739164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.265 [2024-07-12 19:19:40.739176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.265 [2024-07-12 19:19:40.739182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.265 [2024-07-12 19:19:40.741812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.265 [2024-07-12 19:19:40.750522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.265 [2024-07-12 19:19:40.750935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.265 [2024-07-12 19:19:40.750972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.265 [2024-07-12 19:19:40.750996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.265 [2024-07-12 19:19:40.751590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.265 [2024-07-12 19:19:40.752164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.265 [2024-07-12 19:19:40.752173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.265 [2024-07-12 19:19:40.752179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.265 [2024-07-12 19:19:40.754806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.265 [2024-07-12 19:19:40.763329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.265 [2024-07-12 19:19:40.763663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.265 [2024-07-12 19:19:40.763680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.265 [2024-07-12 19:19:40.763687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.265 [2024-07-12 19:19:40.763849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.265 [2024-07-12 19:19:40.764012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.265 [2024-07-12 19:19:40.764021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.265 [2024-07-12 19:19:40.764028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.265 [2024-07-12 19:19:40.766777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 458091 Killed "${NVMF_APP[@]}" "$@"
00:27:38.265 19:19:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:38.265 19:19:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:38.265 19:19:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:38.265 19:19:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:38.265 19:19:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:38.265 19:19:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=459508
00:27:38.265 19:19:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 459508
00:27:38.265 19:19:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:38.265 19:19:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 459508 ']'
00:27:38.265 19:19:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:38.265 [2024-07-12 19:19:40.776490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.265 19:19:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:38.265 [2024-07-12 19:19:40.776922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.265 [2024-07-12 19:19:40.776940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.265 [2024-07-12 19:19:40.776947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.265 19:19:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:38.265 [2024-07-12 19:19:40.777124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.265 19:19:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:38.265 [2024-07-12 19:19:40.777308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.265 [2024-07-12 19:19:40.777319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.265 [2024-07-12 19:19:40.777326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.265 19:19:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:38.265 [2024-07-12 19:19:40.780160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.265 [2024-07-12 19:19:40.789555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.265 [2024-07-12 19:19:40.789980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.265 [2024-07-12 19:19:40.789998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.265 [2024-07-12 19:19:40.790005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.265 [2024-07-12 19:19:40.790182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.265 [2024-07-12 19:19:40.790366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.265 [2024-07-12 19:19:40.790378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.265 [2024-07-12 19:19:40.790384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.265 [2024-07-12 19:19:40.793220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.266 [2024-07-12 19:19:40.802604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.266 [2024-07-12 19:19:40.802956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.266 [2024-07-12 19:19:40.802972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.266 [2024-07-12 19:19:40.802979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.266 [2024-07-12 19:19:40.803157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.266 [2024-07-12 19:19:40.803342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.266 [2024-07-12 19:19:40.803352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.266 [2024-07-12 19:19:40.803359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.266 [2024-07-12 19:19:40.806196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.266 [2024-07-12 19:19:40.815731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.266 [2024-07-12 19:19:40.816093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.266 [2024-07-12 19:19:40.816109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.266 [2024-07-12 19:19:40.816116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.266 [2024-07-12 19:19:40.816294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.266 [2024-07-12 19:19:40.816467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.266 [2024-07-12 19:19:40.816476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.266 [2024-07-12 19:19:40.816482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.266 [2024-07-12 19:19:40.819283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.266 [2024-07-12 19:19:40.826375] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:27:38.266 [2024-07-12 19:19:40.826419] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:38.266 [2024-07-12 19:19:40.828778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.266 [2024-07-12 19:19:40.829181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.266 [2024-07-12 19:19:40.829199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.266 [2024-07-12 19:19:40.829206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.266 [2024-07-12 19:19:40.829391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.266 [2024-07-12 19:19:40.829580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.266 [2024-07-12 19:19:40.829591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.266 [2024-07-12 19:19:40.829598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.526 [2024-07-12 19:19:40.832541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.526 [2024-07-12 19:19:40.841879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.526 [2024-07-12 19:19:40.842246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.526 [2024-07-12 19:19:40.842264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.526 [2024-07-12 19:19:40.842272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.526 [2024-07-12 19:19:40.842447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.526 [2024-07-12 19:19:40.842621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.526 [2024-07-12 19:19:40.842631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.526 [2024-07-12 19:19:40.842637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.526 [2024-07-12 19:19:40.845470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.526 EAL: No free 2048 kB hugepages reported on node 1
00:27:38.526 [2024-07-12 19:19:40.854962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.526 [2024-07-12 19:19:40.855365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.526 [2024-07-12 19:19:40.855386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.526 [2024-07-12 19:19:40.855394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.526 [2024-07-12 19:19:40.855567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.526 [2024-07-12 19:19:40.855742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.526 [2024-07-12 19:19:40.855752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.527 [2024-07-12 19:19:40.855758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.527 [2024-07-12 19:19:40.858512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.527 [2024-07-12 19:19:40.867938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.527 [2024-07-12 19:19:40.868361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.527 [2024-07-12 19:19:40.868378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.527 [2024-07-12 19:19:40.868386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.527 [2024-07-12 19:19:40.868559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.527 [2024-07-12 19:19:40.868731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.527 [2024-07-12 19:19:40.868740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.527 [2024-07-12 19:19:40.868747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.527 [2024-07-12 19:19:40.871494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.527 [2024-07-12 19:19:40.879817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:38.527 [2024-07-12 19:19:40.880917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.527 [2024-07-12 19:19:40.881343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.527 [2024-07-12 19:19:40.881361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.527 [2024-07-12 19:19:40.881368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.527 [2024-07-12 19:19:40.881542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.527 [2024-07-12 19:19:40.881716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.527 [2024-07-12 19:19:40.881726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.527 [2024-07-12 19:19:40.881732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.527 [2024-07-12 19:19:40.884477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.527 [2024-07-12 19:19:40.893898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.527 [2024-07-12 19:19:40.894318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.527 [2024-07-12 19:19:40.894336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.527 [2024-07-12 19:19:40.894343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.527 [2024-07-12 19:19:40.894521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.527 [2024-07-12 19:19:40.894696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.527 [2024-07-12 19:19:40.894706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.527 [2024-07-12 19:19:40.894712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.527 [2024-07-12 19:19:40.897547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.527 [2024-07-12 19:19:40.906974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.527 [2024-07-12 19:19:40.907384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.527 [2024-07-12 19:19:40.907401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.527 [2024-07-12 19:19:40.907409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.527 [2024-07-12 19:19:40.907581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.527 [2024-07-12 19:19:40.907757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.527 [2024-07-12 19:19:40.907767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.527 [2024-07-12 19:19:40.907775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.527 [2024-07-12 19:19:40.910539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
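The "(9): Bad file descriptor" in the flush step is errno 9, EBADF: once connect() has failed, the qpair's socket fd has already been torn down, so the subsequent attempt to flush completions on that qpair operates on a dead descriptor. A tiny illustration of where that errno comes from (hypothetical file names; just plain C, not SPDK code):

    /* ebadf.c - I/O on a closed fd reports errno 9 (EBADF) */
    #include <stdio.h>
    #include <errno.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = dup(1);
        close(fd);                      /* the socket is already torn down */
        if (write(fd, "x", 1) < 0 && errno == EBADF)
            printf("(%d): Bad file descriptor\n", errno);
        return 0;
    }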
00:27:38.527 [2024-07-12 19:19:40.920114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.527 [2024-07-12 19:19:40.920562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.527 [2024-07-12 19:19:40.920582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.527 [2024-07-12 19:19:40.920592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.527 [2024-07-12 19:19:40.920768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.527 [2024-07-12 19:19:40.920943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.527 [2024-07-12 19:19:40.920953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.527 [2024-07-12 19:19:40.920961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.527 [2024-07-12 19:19:40.923773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.527 [2024-07-12 19:19:40.933085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.527 [2024-07-12 19:19:40.933511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.527 [2024-07-12 19:19:40.933528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.527 [2024-07-12 19:19:40.933536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.527 [2024-07-12 19:19:40.933708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.527 [2024-07-12 19:19:40.933882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.527 [2024-07-12 19:19:40.933891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.527 [2024-07-12 19:19:40.933904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.527 [2024-07-12 19:19:40.936656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.527 [2024-07-12 19:19:40.946174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.527 [2024-07-12 19:19:40.946576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.527 [2024-07-12 19:19:40.946593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.527 [2024-07-12 19:19:40.946601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.527 [2024-07-12 19:19:40.946774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.527 [2024-07-12 19:19:40.946949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.527 [2024-07-12 19:19:40.946958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.527 [2024-07-12 19:19:40.946964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.527 [2024-07-12 19:19:40.949773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.527 [2024-07-12 19:19:40.954698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:38.527 [2024-07-12 19:19:40.954724] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:38.527 [2024-07-12 19:19:40.954731] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:38.527 [2024-07-12 19:19:40.954737] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:38.527 [2024-07-12 19:19:40.954743] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:38.527 [2024-07-12 19:19:40.954795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:27:38.527 [2024-07-12 19:19:40.954905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:38.527 [2024-07-12 19:19:40.954906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:27:38.527 [2024-07-12 19:19:40.959282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.527 [2024-07-12 19:19:40.959646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.527 [2024-07-12 19:19:40.959665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.527 [2024-07-12 19:19:40.959674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.527 [2024-07-12 19:19:40.959853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.527 [2024-07-12 19:19:40.960033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.527 [2024-07-12 19:19:40.960043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.527 [2024-07-12 19:19:40.960050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.527 [2024-07-12 19:19:40.962897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.527 [2024-07-12 19:19:40.972461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.527 [2024-07-12 19:19:40.972909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.527 [2024-07-12 19:19:40.972929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.527 [2024-07-12 19:19:40.972937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.527 [2024-07-12 19:19:40.973124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.527 [2024-07-12 19:19:40.973309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.527 [2024-07-12 19:19:40.973320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.527 [2024-07-12 19:19:40.973327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.527 [2024-07-12 19:19:40.976163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.527 [2024-07-12 19:19:40.985538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.527 [2024-07-12 19:19:40.985993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.527 [2024-07-12 19:19:40.986013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420
00:27:38.527 [2024-07-12 19:19:40.986022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set
00:27:38.527 [2024-07-12 19:19:40.986202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor
00:27:38.527 [2024-07-12 19:19:40.986390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.527 [2024-07-12 19:19:40.986400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.527 [2024-07-12 19:19:40.986408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.527 [2024-07-12 19:19:40.989244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
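Each repeated burst above is one pass of the same reset/reconnect sequence: bdev_nvme starts a controller reset, nvme_ctrlr_disconnect drops the qpair, the TCP transport tries to re-dial 10.0.0.2:4420 and gets ECONNREFUSED, controller initialization therefore fails, and _bdev_nvme_reset_ctrlr_complete records the failed reset before the next attempt roughly 13 ms later. A schematic stand-in for that loop (plain C with hypothetical names, not SPDK's actual structures or signatures; the real call chain lives in nvme_ctrlr.c, nvme_tcp.c, and bdev_nvme.c):

    /* reset_loop.c - schematic of the reset/reconnect cycle seen in the log */
    #include <stdbool.h>
    #include <stdio.h>

    enum ctrlr_state { CTRLR_RESETTING, CTRLR_FAILED };   /* hypothetical */

    static bool tcp_qpair_connect_sock(void)
    {
        return false;   /* stands in for connect() -> ECONNREFUSED (errno 111) */
    }

    static enum ctrlr_state reset_ctrlr_once(void)
    {
        printf("resetting controller\n");                    /* nvme_ctrlr_disconnect */
        if (!tcp_qpair_connect_sock()) {                     /* nvme_tcp_qpair_connect_sock */
            printf("controller reinitialization failed\n");  /* reconnect_poll_async */
            printf("Resetting controller failed.\n");        /* _bdev_nvme_reset_ctrlr_complete */
            return CTRLR_FAILED;
        }
        return CTRLR_RESETTING;
    }

    int main(void)
    {
        for (int attempt = 0; attempt < 3; attempt++)        /* the test retries repeatedly */
            (void)reset_ctrlr_once();
        return 0;
    }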
00:27:38.527 [2024-07-12 19:19:40.998633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.527 [2024-07-12 19:19:40.999084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.527 [2024-07-12 19:19:40.999105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.527 [2024-07-12 19:19:40.999113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.527 [2024-07-12 19:19:40.999298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.527 [2024-07-12 19:19:40.999477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.528 [2024-07-12 19:19:40.999487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.528 [2024-07-12 19:19:40.999494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.528 [2024-07-12 19:19:41.002332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.528 [2024-07-12 19:19:41.011714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.528 [2024-07-12 19:19:41.012167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.528 [2024-07-12 19:19:41.012187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.528 [2024-07-12 19:19:41.012196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.528 [2024-07-12 19:19:41.012380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.528 [2024-07-12 19:19:41.012559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.528 [2024-07-12 19:19:41.012569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.528 [2024-07-12 19:19:41.012576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.528 [2024-07-12 19:19:41.015416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.528 [2024-07-12 19:19:41.024779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.528 [2024-07-12 19:19:41.025210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.528 [2024-07-12 19:19:41.025233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.528 [2024-07-12 19:19:41.025241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.528 [2024-07-12 19:19:41.025419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.528 [2024-07-12 19:19:41.025598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.528 [2024-07-12 19:19:41.025608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.528 [2024-07-12 19:19:41.025615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.528 [2024-07-12 19:19:41.028445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.528 [2024-07-12 19:19:41.037985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.528 [2024-07-12 19:19:41.038412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.528 [2024-07-12 19:19:41.038429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.528 [2024-07-12 19:19:41.038437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.528 [2024-07-12 19:19:41.038616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.528 [2024-07-12 19:19:41.038793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.528 [2024-07-12 19:19:41.038803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.528 [2024-07-12 19:19:41.038809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.528 [2024-07-12 19:19:41.041645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.528 [2024-07-12 19:19:41.051181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.528 [2024-07-12 19:19:41.051610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.528 [2024-07-12 19:19:41.051628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.528 [2024-07-12 19:19:41.051635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.528 [2024-07-12 19:19:41.051813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.528 [2024-07-12 19:19:41.051991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.528 [2024-07-12 19:19:41.052001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.528 [2024-07-12 19:19:41.052007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.528 [2024-07-12 19:19:41.054841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.528 [2024-07-12 19:19:41.064379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.528 [2024-07-12 19:19:41.064807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.528 [2024-07-12 19:19:41.064827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.528 [2024-07-12 19:19:41.064834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.528 [2024-07-12 19:19:41.065012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.528 [2024-07-12 19:19:41.065191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.528 [2024-07-12 19:19:41.065201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.528 [2024-07-12 19:19:41.065207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.528 [2024-07-12 19:19:41.068046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.528 [2024-07-12 19:19:41.077566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.528 [2024-07-12 19:19:41.077930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.528 [2024-07-12 19:19:41.077947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.528 [2024-07-12 19:19:41.077956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.528 [2024-07-12 19:19:41.078133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.528 [2024-07-12 19:19:41.078315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.528 [2024-07-12 19:19:41.078326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.528 [2024-07-12 19:19:41.078332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.528 [2024-07-12 19:19:41.081160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.528 [2024-07-12 19:19:41.090688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.528 [2024-07-12 19:19:41.090978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.528 [2024-07-12 19:19:41.090995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.528 [2024-07-12 19:19:41.091002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.528 [2024-07-12 19:19:41.091180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.528 [2024-07-12 19:19:41.091364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.528 [2024-07-12 19:19:41.091374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.528 [2024-07-12 19:19:41.091380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.788 [2024-07-12 19:19:41.094213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.788 [2024-07-12 19:19:41.103754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.788 [2024-07-12 19:19:41.104101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.788 [2024-07-12 19:19:41.104118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.788 [2024-07-12 19:19:41.104126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.788 [2024-07-12 19:19:41.104310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.788 [2024-07-12 19:19:41.104492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.788 [2024-07-12 19:19:41.104502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.788 [2024-07-12 19:19:41.104509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.788 [2024-07-12 19:19:41.107336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.788 [2024-07-12 19:19:41.116875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.788 [2024-07-12 19:19:41.117309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.788 [2024-07-12 19:19:41.117326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.788 [2024-07-12 19:19:41.117334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.788 [2024-07-12 19:19:41.117512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.788 [2024-07-12 19:19:41.117690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.788 [2024-07-12 19:19:41.117699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.788 [2024-07-12 19:19:41.117705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.788 [2024-07-12 19:19:41.120541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.788 [2024-07-12 19:19:41.130079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.788 [2024-07-12 19:19:41.130494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.788 [2024-07-12 19:19:41.130511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.788 [2024-07-12 19:19:41.130519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.788 [2024-07-12 19:19:41.130696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.788 [2024-07-12 19:19:41.130876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.788 [2024-07-12 19:19:41.130886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.788 [2024-07-12 19:19:41.130893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.788 [2024-07-12 19:19:41.133727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.788 [2024-07-12 19:19:41.143266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.788 [2024-07-12 19:19:41.143674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.788 [2024-07-12 19:19:41.143691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.788 [2024-07-12 19:19:41.143698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.788 [2024-07-12 19:19:41.143875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.788 [2024-07-12 19:19:41.144054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.788 [2024-07-12 19:19:41.144063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.788 [2024-07-12 19:19:41.144070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.788 [2024-07-12 19:19:41.146902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.788 [2024-07-12 19:19:41.156442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.788 [2024-07-12 19:19:41.156810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.788 [2024-07-12 19:19:41.156827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.788 [2024-07-12 19:19:41.156835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.788 [2024-07-12 19:19:41.157013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.788 [2024-07-12 19:19:41.157190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.788 [2024-07-12 19:19:41.157200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.788 [2024-07-12 19:19:41.157206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.788 [2024-07-12 19:19:41.160036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.788 [2024-07-12 19:19:41.169559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.788 [2024-07-12 19:19:41.169952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.788 [2024-07-12 19:19:41.169969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.788 [2024-07-12 19:19:41.169976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.788 [2024-07-12 19:19:41.170154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.789 [2024-07-12 19:19:41.170337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.789 [2024-07-12 19:19:41.170347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.789 [2024-07-12 19:19:41.170354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.789 [2024-07-12 19:19:41.173181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.789 [2024-07-12 19:19:41.182715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.789 [2024-07-12 19:19:41.183073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.789 [2024-07-12 19:19:41.183090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.789 [2024-07-12 19:19:41.183097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.789 [2024-07-12 19:19:41.183279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.789 [2024-07-12 19:19:41.183458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.789 [2024-07-12 19:19:41.183468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.789 [2024-07-12 19:19:41.183475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.789 [2024-07-12 19:19:41.186313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.789 [2024-07-12 19:19:41.195829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.789 [2024-07-12 19:19:41.196190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.789 [2024-07-12 19:19:41.196207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.789 [2024-07-12 19:19:41.196217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.789 [2024-07-12 19:19:41.196400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.789 [2024-07-12 19:19:41.196578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.789 [2024-07-12 19:19:41.196588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.789 [2024-07-12 19:19:41.196595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.789 [2024-07-12 19:19:41.199427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.789 [2024-07-12 19:19:41.208968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.789 [2024-07-12 19:19:41.209323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.789 [2024-07-12 19:19:41.209340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.789 [2024-07-12 19:19:41.209347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.789 [2024-07-12 19:19:41.209524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.789 [2024-07-12 19:19:41.209704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.789 [2024-07-12 19:19:41.209713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.789 [2024-07-12 19:19:41.209719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.789 [2024-07-12 19:19:41.212549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.789 [2024-07-12 19:19:41.222082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.789 [2024-07-12 19:19:41.222514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.789 [2024-07-12 19:19:41.222531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.789 [2024-07-12 19:19:41.222539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.789 [2024-07-12 19:19:41.222716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.789 [2024-07-12 19:19:41.222894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.789 [2024-07-12 19:19:41.222903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.789 [2024-07-12 19:19:41.222910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.789 [2024-07-12 19:19:41.225739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.789 [2024-07-12 19:19:41.235273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.789 [2024-07-12 19:19:41.235704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.789 [2024-07-12 19:19:41.235721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.789 [2024-07-12 19:19:41.235728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.789 [2024-07-12 19:19:41.235905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.789 [2024-07-12 19:19:41.236084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.789 [2024-07-12 19:19:41.236094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.789 [2024-07-12 19:19:41.236104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.789 [2024-07-12 19:19:41.238935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.789 [2024-07-12 19:19:41.248457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.789 [2024-07-12 19:19:41.248888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.789 [2024-07-12 19:19:41.248905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.789 [2024-07-12 19:19:41.248912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.789 [2024-07-12 19:19:41.249089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.789 [2024-07-12 19:19:41.249272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.789 [2024-07-12 19:19:41.249282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.789 [2024-07-12 19:19:41.249289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.789 [2024-07-12 19:19:41.252122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.789 [2024-07-12 19:19:41.261643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.789 [2024-07-12 19:19:41.262072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.789 [2024-07-12 19:19:41.262088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.789 [2024-07-12 19:19:41.262095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.789 [2024-07-12 19:19:41.262277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.789 [2024-07-12 19:19:41.262456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.789 [2024-07-12 19:19:41.262466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.789 [2024-07-12 19:19:41.262473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.789 [2024-07-12 19:19:41.265307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.789 [2024-07-12 19:19:41.274832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.789 [2024-07-12 19:19:41.275160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.789 [2024-07-12 19:19:41.275176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.789 [2024-07-12 19:19:41.275184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.789 [2024-07-12 19:19:41.275365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.789 [2024-07-12 19:19:41.275543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.789 [2024-07-12 19:19:41.275552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.789 [2024-07-12 19:19:41.275558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.789 [2024-07-12 19:19:41.278389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.789 [2024-07-12 19:19:41.287925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.789 [2024-07-12 19:19:41.288344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.789 [2024-07-12 19:19:41.288361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.789 [2024-07-12 19:19:41.288369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.789 [2024-07-12 19:19:41.288547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.789 [2024-07-12 19:19:41.288726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.789 [2024-07-12 19:19:41.288735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.789 [2024-07-12 19:19:41.288741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.789 [2024-07-12 19:19:41.291577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.789 [2024-07-12 19:19:41.301098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.789 [2024-07-12 19:19:41.301525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.789 [2024-07-12 19:19:41.301543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.789 [2024-07-12 19:19:41.301550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.789 [2024-07-12 19:19:41.301728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.789 [2024-07-12 19:19:41.301906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.789 [2024-07-12 19:19:41.301915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.789 [2024-07-12 19:19:41.301921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.789 [2024-07-12 19:19:41.304753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.790 [2024-07-12 19:19:41.314296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.790 [2024-07-12 19:19:41.314722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.790 [2024-07-12 19:19:41.314738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.790 [2024-07-12 19:19:41.314746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.790 [2024-07-12 19:19:41.314922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.790 [2024-07-12 19:19:41.315101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.790 [2024-07-12 19:19:41.315111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.790 [2024-07-12 19:19:41.315117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.790 [2024-07-12 19:19:41.317952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.790 [2024-07-12 19:19:41.327484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.790 [2024-07-12 19:19:41.327912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.790 [2024-07-12 19:19:41.327929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.790 [2024-07-12 19:19:41.327937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.790 [2024-07-12 19:19:41.328117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.790 [2024-07-12 19:19:41.328302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.790 [2024-07-12 19:19:41.328312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.790 [2024-07-12 19:19:41.328319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.790 [2024-07-12 19:19:41.331145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.790 [2024-07-12 19:19:41.340673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.790 [2024-07-12 19:19:41.341049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.790 [2024-07-12 19:19:41.341066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.790 [2024-07-12 19:19:41.341073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:38.790 [2024-07-12 19:19:41.341254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:38.790 [2024-07-12 19:19:41.341433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.790 [2024-07-12 19:19:41.341442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.790 [2024-07-12 19:19:41.341448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.790 [2024-07-12 19:19:41.344280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.790 [2024-07-12 19:19:41.353819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.790 [2024-07-12 19:19:41.354182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.790 [2024-07-12 19:19:41.354199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:38.790 [2024-07-12 19:19:41.354207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.049 [2024-07-12 19:19:41.354390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.049 [2024-07-12 19:19:41.354570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.049 [2024-07-12 19:19:41.354582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.049 [2024-07-12 19:19:41.354589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.049 [2024-07-12 19:19:41.357426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.049 [2024-07-12 19:19:41.366961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.049 [2024-07-12 19:19:41.367362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.049 [2024-07-12 19:19:41.367380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:39.049 [2024-07-12 19:19:41.367387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.049 [2024-07-12 19:19:41.367566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.049 [2024-07-12 19:19:41.367745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.049 [2024-07-12 19:19:41.367755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.049 [2024-07-12 19:19:41.367765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.049 [2024-07-12 19:19:41.370598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.049 [2024-07-12 19:19:41.380123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.049 [2024-07-12 19:19:41.380484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.049 [2024-07-12 19:19:41.380501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:39.049 [2024-07-12 19:19:41.380509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.049 [2024-07-12 19:19:41.380685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.049 [2024-07-12 19:19:41.380863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.049 [2024-07-12 19:19:41.380872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.049 [2024-07-12 19:19:41.380879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.049 [2024-07-12 19:19:41.383712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.049 [2024-07-12 19:19:41.393250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.049 [2024-07-12 19:19:41.393671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.049 [2024-07-12 19:19:41.393688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:39.049 [2024-07-12 19:19:41.393696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.049 [2024-07-12 19:19:41.393873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.049 [2024-07-12 19:19:41.394051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.049 [2024-07-12 19:19:41.394061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.049 [2024-07-12 19:19:41.394069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.049 [2024-07-12 19:19:41.396900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.049 [2024-07-12 19:19:41.406427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.049 [2024-07-12 19:19:41.406808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.050 [2024-07-12 19:19:41.406825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:39.050 [2024-07-12 19:19:41.406832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.050 [2024-07-12 19:19:41.407010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.050 [2024-07-12 19:19:41.407188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.050 [2024-07-12 19:19:41.407198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.050 [2024-07-12 19:19:41.407205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.050 [2024-07-12 19:19:41.410049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.050 [2024-07-12 19:19:41.419640 - 19:19:41.594169] (the same disconnect / connect()-refused / reset-failed cycle repeats fourteen more times for tqpair=0xdf1980 against 10.0.0.2:4420, one attempt roughly every 13 ms, with no change other than the timestamps)
00:27:39.051 [2024-07-12 19:19:41.603712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.051 [2024-07-12 19:19:41.604095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.051 [2024-07-12 19:19:41.604112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:39.051 [2024-07-12 19:19:41.604119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.051 [2024-07-12 19:19:41.604301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.051 [2024-07-12 19:19:41.604481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.051 [2024-07-12 19:19:41.604490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.051 [2024-07-12 19:19:41.604497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.051 [2024-07-12 19:19:41.607335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.310 [2024-07-12 19:19:41.616883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.310 [2024-07-12 19:19:41.617187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.310 [2024-07-12 19:19:41.617205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:39.310 [2024-07-12 19:19:41.617212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.310 [2024-07-12 19:19:41.617396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.310 [2024-07-12 19:19:41.617575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.310 [2024-07-12 19:19:41.617584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.310 [2024-07-12 19:19:41.617591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.310 [2024-07-12 19:19:41.620431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.310 [2024-07-12 19:19:41.629972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.310 [2024-07-12 19:19:41.630264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.310 [2024-07-12 19:19:41.630282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:39.310 [2024-07-12 19:19:41.630290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.310 [2024-07-12 19:19:41.630467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.310 [2024-07-12 19:19:41.630645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.310 [2024-07-12 19:19:41.630655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.310 [2024-07-12 19:19:41.630661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.310 [2024-07-12 19:19:41.633500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.310 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:39.310 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:27:39.310 19:19:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:39.310 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:39.310 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.310 [2024-07-12 19:19:41.643044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.310 [2024-07-12 19:19:41.643386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.311 [2024-07-12 19:19:41.643403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:39.311 [2024-07-12 19:19:41.643412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.311 [2024-07-12 19:19:41.643590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.311 [2024-07-12 19:19:41.643770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.311 [2024-07-12 19:19:41.643780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.311 [2024-07-12 19:19:41.643787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.311 [2024-07-12 19:19:41.646629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.311 [2024-07-12 19:19:41.656166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.311 [2024-07-12 19:19:41.656497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.311 [2024-07-12 19:19:41.656515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:39.311 [2024-07-12 19:19:41.656523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.311 [2024-07-12 19:19:41.656702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.311 [2024-07-12 19:19:41.656882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.311 [2024-07-12 19:19:41.656892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.311 [2024-07-12 19:19:41.656900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.311 [2024-07-12 19:19:41.659743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.311 [2024-07-12 19:19:41.669291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.311 [2024-07-12 19:19:41.669658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.311 [2024-07-12 19:19:41.669674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:39.311 [2024-07-12 19:19:41.669682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.311 [2024-07-12 19:19:41.669859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.311 [2024-07-12 19:19:41.670037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.311 [2024-07-12 19:19:41.670046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.311 [2024-07-12 19:19:41.670053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.311 [2024-07-12 19:19:41.672893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.311 [2024-07-12 19:19:41.679653] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.311 [2024-07-12 19:19:41.682441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.311 [2024-07-12 19:19:41.682728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.311 [2024-07-12 19:19:41.682745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:39.311 [2024-07-12 19:19:41.682753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.311 [2024-07-12 19:19:41.682930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.311 [2024-07-12 19:19:41.683110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.311 [2024-07-12 19:19:41.683121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.311 [2024-07-12 19:19:41.683128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.311 [2024-07-12 19:19:41.685983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.311 [2024-07-12 19:19:41.695564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.311 [2024-07-12 19:19:41.695863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.311 [2024-07-12 19:19:41.695881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:39.311 [2024-07-12 19:19:41.695888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.311 [2024-07-12 19:19:41.696067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.311 [2024-07-12 19:19:41.696250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.311 [2024-07-12 19:19:41.696267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.311 [2024-07-12 19:19:41.696275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:39.311 [2024-07-12 19:19:41.699114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.311 [2024-07-12 19:19:41.708673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.311 [2024-07-12 19:19:41.709097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.311 [2024-07-12 19:19:41.709117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:39.311 [2024-07-12 19:19:41.709126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.311 [2024-07-12 19:19:41.709309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.311 [2024-07-12 19:19:41.709489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.311 [2024-07-12 19:19:41.709500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.311 [2024-07-12 19:19:41.709515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.311 [2024-07-12 19:19:41.712351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.311 Malloc0 00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.311 [2024-07-12 19:19:41.721886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.311 [2024-07-12 19:19:41.722232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.311 [2024-07-12 19:19:41.722251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:39.311 [2024-07-12 19:19:41.722259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.311 [2024-07-12 19:19:41.722436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.311 [2024-07-12 19:19:41.722615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.311 [2024-07-12 19:19:41.722625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.311 [2024-07-12 19:19:41.722632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.311 [2024-07-12 19:19:41.725474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.311 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.312 19:19:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.312 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.312 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.312 [2024-07-12 19:19:41.735026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.312 [2024-07-12 19:19:41.735367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.312 [2024-07-12 19:19:41.735385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf1980 with addr=10.0.0.2, port=4420 00:27:39.312 [2024-07-12 19:19:41.735393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1980 is same with the state(5) to be set 00:27:39.312 [2024-07-12 19:19:41.735569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf1980 (9): Bad file descriptor 00:27:39.312 [2024-07-12 19:19:41.735749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.312 [2024-07-12 19:19:41.735758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.312 [2024-07-12 19:19:41.735765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.312 [2024-07-12 19:19:41.736270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.312 [2024-07-12 19:19:41.738604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.312 19:19:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.312 19:19:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 458574 00:27:39.312 [2024-07-12 19:19:41.748155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.571 [2024-07-12 19:19:41.939811] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
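With the xtrace noise stripped away, the target-side bring-up interleaved above is a five-step RPC sequence, and the moment the listener notice appears (19:19:41.736) the pending reset finally succeeds (19:19:41.939). Run by hand it would look roughly like this (the scripts/rpc.py invocation style is an assumption; the arguments are exactly the ones traced above):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420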
00:27:49.547 00:27:49.547 Latency(us) 00:27:49.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.547 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:49.547 Verification LBA range: start 0x0 length 0x4000 00:27:49.547 Nvme1n1 : 15.00 8114.56 31.70 13136.03 0.00 6003.50 443.44 19831.76 00:27:49.547 =================================================================================================================== 00:27:49.547 Total : 8114.56 31.70 13136.03 0.00 6003.50 443.44 19831.76 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:49.547 rmmod nvme_tcp 00:27:49.547 rmmod nvme_fabrics 00:27:49.547 rmmod nvme_keyring 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 459508 ']' 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 459508 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 459508 ']' 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 459508 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 459508 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 459508' 00:27:49.547 killing process with pid 459508 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 459508 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 459508 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:49.547 
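The summary row above reads: 15 s runtime, 8114.56 IOPS, 31.70 MiB/s, with a large Fail/s figure that is consistent with I/O being issued throughout the deliberate reset storm rather than with a broken run; the rmmod lines that follow confirm the kernel initiator modules unloaded cleanly before the next test. A hypothetical one-liner for pulling the IOPS column out of results in this format (the log file name and field position are assumptions based on the header shown):

    grep 'Nvme1n1 :' bdevperf.log | awk '{print $4}'    # 8114.56 (IOPS is the 4th field)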
19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:49.547 19:19:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.506 19:19:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:50.506 00:27:50.506 real 0m26.558s 00:27:50.506 user 1m3.490s 00:27:50.506 sys 0m6.400s 00:27:50.506 19:19:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:50.506 19:19:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:50.506 ************************************ 00:27:50.506 END TEST nvmf_bdevperf 00:27:50.506 ************************************ 00:27:50.506 19:19:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:50.506 19:19:52 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:50.506 19:19:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:50.506 19:19:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:50.506 19:19:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.506 ************************************ 00:27:50.506 START TEST nvmf_target_disconnect 00:27:50.506 ************************************ 00:27:50.506 19:19:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:50.506 * Looking for test storage... 
00:27:50.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.506 19:19:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.765 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:50.765 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:50.765 19:19:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:50.765 19:19:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
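nvmftestinit builds the host identity from nvme-cli before it ever touches the NICs: gen-hostnqn emits a fresh nqn.2014-08.org.nvmexpress:uuid:<uuid> string, the UUID suffix becomes the host ID, and both feed the NVME_HOST argument array traced above. A sketch of that derivation (the parameter expansion is an assumed equivalent, not the harness's literal code):

    HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
    HOSTID=${HOSTNQN##*:}          # keep the UUID after the last colon
    NVME_HOST=(--hostnqn="$HOSTNQN" --hostid="$HOSTID")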
00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:56.042 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:56.042 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.042 19:19:58 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:56.042 Found net devices under 0000:86:00.0: cvl_0_0 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:56.042 Found net devices under 0000:86:00.1: cvl_0_1 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:56.042 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:56.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:56.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:27:56.302 00:27:56.302 --- 10.0.0.2 ping statistics --- 00:27:56.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.302 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:56.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:56.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:27:56.302 00:27:56.302 --- 10.0.0.1 ping statistics --- 00:27:56.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.302 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:56.302 ************************************ 00:27:56.302 START TEST nvmf_target_disconnect_tc1 00:27:56.302 ************************************ 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:27:56.302 
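The nvmf_tcp_init sequence traced above builds the entire two-endpoint test topology on one host by moving the target-side port into a network namespace. A minimal sketch of the same pattern, reusing the interface names and addresses from this run (the real nvmf/common.sh helper additionally handles device discovery, address flushing, and cleanup):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
ping -c 1 10.0.0.2                              # root namespace -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1          # namespace -> root namespace

With that in place, anything started under ip netns exec cvl_0_0_ns_spdk sees 10.0.0.2 as a local address, which is why NVMF_APP is prefixed with the namespace command above.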
19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:56.302 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:56.562 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.562 [2024-07-12 19:19:58.941594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.562 [2024-07-12 19:19:58.941632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f8e60 with addr=10.0.0.2, port=4420 00:27:56.562 [2024-07-12 19:19:58.941650] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:56.562 [2024-07-12 19:19:58.941660] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:56.562 [2024-07-12 19:19:58.941667] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:56.562 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:56.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:56.562 Initializing NVMe Controllers 00:27:56.562 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:27:56.562 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:56.562 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:56.562 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:56.562 00:27:56.562 real 0m0.108s 00:27:56.562 user 0m0.043s 00:27:56.562 sys 
0m0.065s 00:27:56.562 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:56.562 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:56.562 ************************************ 00:27:56.562 END TEST nvmf_target_disconnect_tc1 00:27:56.562 ************************************ 00:27:56.562 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:56.562 19:19:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:56.562 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:56.562 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:56.562 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:56.562 ************************************ 00:27:56.562 START TEST nvmf_target_disconnect_tc2 00:27:56.562 ************************************ 00:27:56.562 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:27:56.562 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:56.562 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:56.562 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:56.562 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:56.562 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:56.562 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=464601 00:27:56.562 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 464601 00:27:56.562 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:56.562 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 464601 ']' 00:27:56.562 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.562 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:56.562 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
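disconnect_init starts the target inside the namespace and then blocks in waitforlisten until the RPC socket answers. Reduced to its essentials (paths shortened, core mask 0xF0 taken from this run, and the polling loop here is a simplified sketch rather than the harness's exact implementation):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
# Poll the RPC socket; any cheap RPC such as spdk_get_version works as a probe.
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died during startup
    sleep 0.5
done

The -m 0xF0 mask is why the reactor threads report cores 4 through 7 in the startup log below.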
00:27:56.562 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:56.562 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:56.562 [2024-07-12 19:19:59.077665] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:27:56.562 [2024-07-12 19:19:59.077706] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.562 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.821 [2024-07-12 19:19:59.145707] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:56.821 [2024-07-12 19:19:59.224177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.821 [2024-07-12 19:19:59.224212] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.822 [2024-07-12 19:19:59.224218] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.822 [2024-07-12 19:19:59.224230] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:56.822 [2024-07-12 19:19:59.224235] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:56.822 [2024-07-12 19:19:59.224348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:56.822 [2024-07-12 19:19:59.224452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:56.822 [2024-07-12 19:19:59.224566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:56.822 [2024-07-12 19:19:59.224568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.390 Malloc0 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:57.390 19:19:59 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.390 [2024-07-12 19:19:59.943167] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.390 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.650 [2024-07-12 19:19:59.972197] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=464704 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:57.650 19:19:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:57.650 EAL: No free 2048 kB 
hugepages reported on node 1 00:27:59.577 19:20:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 464601 00:27:59.577 19:20:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:59.577 Read completed with error (sct=0, sc=8) 00:27:59.577 starting I/O failed 00:27:59.577 Write completed with error (sct=0, sc=8) 00:27:59.577 starting I/O failed 00:27:59.577 [the same Read/Write completion errors repeat for every remaining outstanding I/O on each of the four qpairs] 00:27:59.577 [2024-07-12 19:20:01.999442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.578 [2024-07-12 19:20:01.999649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.578 [2024-07-12 19:20:01.999838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:59.578 [2024-07-12 19:20:02.000031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:59.578 [2024-07-12 19:20:02.000302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.578 [2024-07-12 19:20:02.000351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.578 qpair failed and we were unable to recover it. 00:27:59.578 [last three messages repeated for the reconnect attempts stamped 19:20:02.000461 through 19:20:02.001106]
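Once the target is hard-killed, SPDK fails all outstanding I/O back to the reconnect example (the Read/Write completions above carry sct=0, sc=8, a generic-status abort used when a qpair is torn down), and every subsequent connect() to 10.0.0.2:4420 is refused. For reference, the subsystem those qpairs belonged to was provisioned through the rpc_cmd calls traced earlier; expressed as plain rpc.py invocations (the rpc wrapper function here is illustrative, not part of the harness), the sequence is:

rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # illustrative wrapper
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_transport -t tcp -o
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

In the harness these run through rpc_cmd, which targets the socket of the namespaced nvmf_tgt.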
00:27:59.578 [2024-07-12 19:20:02.001406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.578 [2024-07-12 19:20:02.001416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.578 qpair failed and we were unable to recover it. 00:27:59.580 [last three messages repeated for every further reconnect attempt on tqpair=0xcebed0, 19:20:02.001510 through 19:20:02.012450]
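errno = 111 is ECONNREFUSED: with nvmf_tgt gone, nothing listens on 10.0.0.2:4420, so every socket the initiator opens is rejected immediately. A test that later restarts the target can wait for the listener to come back rather than sleeping blindly; a minimal bash probe using the /dev/tcp pseudo-device (a hypothetical helper, not part of the harness):

wait_for_listener() {
    local addr=$1 port=$2 deadline=$((SECONDS + 30))
    while (( SECONDS < deadline )); do
        # bash's /dev/tcp performs a plain connect() under the hood
        (exec 3<>"/dev/tcp/$addr/$port") 2>/dev/null && return 0
        sleep 0.2
    done
    return 1
}
wait_for_listener 10.0.0.2 4420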
00:27:59.580 [2024-07-12 19:20:02.012601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.580 [2024-07-12 19:20:02.012630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.580 qpair failed and we were unable to recover it. 00:27:59.580 [2024-07-12 19:20:02.012923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.580 [2024-07-12 19:20:02.012992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.580 qpair failed and we were unable to recover it. 00:27:59.581 [last three messages repeated on tqpair=0x7fd874000b90 for the attempts stamped 19:20:02.013209 through 19:20:02.021785]
00:27:59.581 [2024-07-12 19:20:02.021987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.022016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.022253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.022285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.022473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.022503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.022671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.022701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.022974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.023003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.023218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.023260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.023521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.023551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.023740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.023770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.024040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.024070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.024311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.024341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 
00:27:59.581 [2024-07-12 19:20:02.024576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.024605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.024789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.024818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.025064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.025093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.025275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.025305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.025489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.025529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.025708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.025737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.025927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.025956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.026189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.026219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.026430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.026460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.026645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.026674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 
00:27:59.581 [2024-07-12 19:20:02.026937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.026965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.027248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.027279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.027423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.027452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.581 qpair failed and we were unable to recover it. 00:27:59.581 [2024-07-12 19:20:02.027621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.581 [2024-07-12 19:20:02.027651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.027833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.027862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.027964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.027993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.028173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.028202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.028427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.028457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.028642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.028672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.028911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.028940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 
00:27:59.582 [2024-07-12 19:20:02.029109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.029138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.029314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.029345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.029533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.029562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.029800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.029830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.030030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.030060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.030274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.030304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.030503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.030533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.030714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.030744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.030946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.030976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.031117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.031146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 
00:27:59.582 [2024-07-12 19:20:02.031407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.031438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.031730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.031760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.031961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.031990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.032273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.032304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.032507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.032537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.032801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.032831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.033120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.033150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.033346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.033376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.033638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.033668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.033799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.033828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 
00:27:59.582 [2024-07-12 19:20:02.033957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.033986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.034151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.034181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.034379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.582 [2024-07-12 19:20:02.034409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.582 qpair failed and we were unable to recover it. 00:27:59.582 [2024-07-12 19:20:02.034624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.034653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.034789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.034819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.035061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.035091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.035281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.035311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.035493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.035524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.035691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.035721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.036012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.036041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 
00:27:59.583 [2024-07-12 19:20:02.036285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.036316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.036448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.036477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.036597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.036627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.036814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.036845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.037101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.037130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.037397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.037428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.037558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.037587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.037729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.037758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.037879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.037909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.038097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.038127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 
00:27:59.583 [2024-07-12 19:20:02.038297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.038327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.038463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.038493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.038706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.038736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.038963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.038993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.039187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.039217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.039363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.039394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.039588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.039617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.039753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.039782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.039983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.040012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.040219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.040259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 
00:27:59.583 [2024-07-12 19:20:02.040523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.040553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.040757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.040792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.040971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.041000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.041245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.041276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.041450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.041480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.041661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.041690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.041883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.041912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.042104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.042134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.042401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.042432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.042601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.042630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 
00:27:59.583 [2024-07-12 19:20:02.042840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.042870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.583 [2024-07-12 19:20:02.043131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.583 [2024-07-12 19:20:02.043161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.583 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.043342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.043372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.043604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.043634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.043821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.043851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.044089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.044119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.044356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.044387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.044650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.044680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.044902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.044932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.045069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.045098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 
00:27:59.584 [2024-07-12 19:20:02.045299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.045329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.045511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.045540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.045729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.045758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.045971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.046001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.046207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.046247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.046451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.046481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.046679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.046708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.046829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.046858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.047150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.047181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.047378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.047409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 
00:27:59.584 [2024-07-12 19:20:02.047649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.047678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.047921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.047950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.048137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.048166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.048369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.048400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.048611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.048641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.048906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.048936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.049185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.049215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.049393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.049423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.049630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.049659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.049869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.049899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 
00:27:59.584 [2024-07-12 19:20:02.050187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.050217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.050409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.050444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.050591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.050621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.050861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.050891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.051147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.051179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.051360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.051391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.051625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.051654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.584 qpair failed and we were unable to recover it. 00:27:59.584 [2024-07-12 19:20:02.051775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.584 [2024-07-12 19:20:02.051804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.585 qpair failed and we were unable to recover it. 00:27:59.585 [2024-07-12 19:20:02.051970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.585 [2024-07-12 19:20:02.052000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.585 qpair failed and we were unable to recover it. 00:27:59.585 [2024-07-12 19:20:02.052267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.585 [2024-07-12 19:20:02.052299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.585 qpair failed and we were unable to recover it. 
00:27:59.585 [2024-07-12 19:20:02.052418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.585 [2024-07-12 19:20:02.052448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.585 qpair failed and we were unable to recover it. 00:27:59.585 [2024-07-12 19:20:02.052563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.585 [2024-07-12 19:20:02.052592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.585 qpair failed and we were unable to recover it. 00:27:59.585 [2024-07-12 19:20:02.052730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.585 [2024-07-12 19:20:02.052759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.585 qpair failed and we were unable to recover it. 00:27:59.585 [2024-07-12 19:20:02.052877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.585 [2024-07-12 19:20:02.052906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.585 qpair failed and we were unable to recover it. 00:27:59.585 [2024-07-12 19:20:02.053099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.585 [2024-07-12 19:20:02.053129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.585 qpair failed and we were unable to recover it. 00:27:59.585 [2024-07-12 19:20:02.053315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.585 [2024-07-12 19:20:02.053346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.585 qpair failed and we were unable to recover it. 00:27:59.585 [2024-07-12 19:20:02.053532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.585 [2024-07-12 19:20:02.053562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.585 qpair failed and we were unable to recover it. 00:27:59.585 [2024-07-12 19:20:02.053751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.585 [2024-07-12 19:20:02.053781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.585 qpair failed and we were unable to recover it. 00:27:59.585 [2024-07-12 19:20:02.053972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.585 [2024-07-12 19:20:02.054001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.585 qpair failed and we were unable to recover it. 00:27:59.585 [2024-07-12 19:20:02.054215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.585 [2024-07-12 19:20:02.054252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.585 qpair failed and we were unable to recover it. 
00:27:59.590 [2024-07-12 19:20:02.102498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.102528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.102817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.102846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.103106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.103135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.103331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.103363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.103634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.103664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.103870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.103899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.104191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.104220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.104422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.104452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.104645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.104674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.104913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.104942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 
00:27:59.590 [2024-07-12 19:20:02.105114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.105143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.105386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.105417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.105675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.105703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.105878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.105907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.106051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.106079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.106341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.106371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.106645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.106674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.106793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.106821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.107070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.107100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.107391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.107421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 
00:27:59.590 [2024-07-12 19:20:02.107721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.107750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.107998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.108027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.108300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.108330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.108523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.590 [2024-07-12 19:20:02.108552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.590 qpair failed and we were unable to recover it. 00:27:59.590 [2024-07-12 19:20:02.108821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.108851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.109044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.109073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.109338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.109369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.109567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.109596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.109785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.109814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.110084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.110113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 
00:27:59.591 [2024-07-12 19:20:02.110303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.110333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.110529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.110564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.110783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.110812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.110994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.111022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.111305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.111336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.111614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.111644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.111918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.111947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.112160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.112189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.112481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.112512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.112793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.112822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 
00:27:59.591 [2024-07-12 19:20:02.113077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.113107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.113239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.113270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.113543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.113573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.113702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.113731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.114009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.114037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.114324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.114355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.114461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.114490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.114764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.591 [2024-07-12 19:20:02.114793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.591 qpair failed and we were unable to recover it. 00:27:59.591 [2024-07-12 19:20:02.115038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.115067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.115281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.115312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 
00:27:59.592 [2024-07-12 19:20:02.115582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.115610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.115860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.115889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.116060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.116089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.116382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.116412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.116677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.116706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.116920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.116950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.117199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.117250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.117544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.117574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.117836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.117865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.118065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.118094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 
00:27:59.592 [2024-07-12 19:20:02.118287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.118318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.118585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.118615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.118906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.118936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.119212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.119252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.119500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.119530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.119777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.119806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.119990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.120018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.120214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.120271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.120451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.120479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.120754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.120783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 
00:27:59.592 [2024-07-12 19:20:02.120955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.120984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.121256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.121292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.121623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.121653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.121889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.121919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.122163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.122192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.122461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.122492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.122787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.122816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.122989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.123019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.123211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.123250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.123532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.123562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 
00:27:59.592 [2024-07-12 19:20:02.123754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.123784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.124051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.124080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.124352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.124383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.124649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.124679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.124951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.124981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.125185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.125214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.125497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.125528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.125787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.592 [2024-07-12 19:20:02.125816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.592 qpair failed and we were unable to recover it. 00:27:59.592 [2024-07-12 19:20:02.126075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.126104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.126402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.126433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 
00:27:59.593 [2024-07-12 19:20:02.126710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.126739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.126948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.126977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.127244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.127274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.127464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.127493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.127761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.127790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.127987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.128016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.128284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.128314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.128581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.128611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.128912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.128943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.129243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.129273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 
00:27:59.593 [2024-07-12 19:20:02.129543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.129573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.129871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.129900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.130177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.130206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.130426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.130457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.130732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.130761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.131058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.131087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.131339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.131389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.131591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.131620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.131817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.131847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.132120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.132149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 
00:27:59.593 [2024-07-12 19:20:02.132293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.132324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.132596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.132631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.132880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.132909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.133180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.133209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.133514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.133545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.133812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.133842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.134096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.134126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.134417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.134448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.134660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.134689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.134883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.134914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 
00:27:59.593 [2024-07-12 19:20:02.135130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.135160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.135470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.135501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.135703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.135733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.136013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.136042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.136300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.136331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.136486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.136516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.593 qpair failed and we were unable to recover it. 00:27:59.593 [2024-07-12 19:20:02.136786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.593 [2024-07-12 19:20:02.136816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.594 qpair failed and we were unable to recover it. 00:27:59.594 [2024-07-12 19:20:02.137141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.594 [2024-07-12 19:20:02.137170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.594 qpair failed and we were unable to recover it. 00:27:59.594 [2024-07-12 19:20:02.137470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.594 [2024-07-12 19:20:02.137501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.594 qpair failed and we were unable to recover it. 00:27:59.594 [2024-07-12 19:20:02.137691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.594 [2024-07-12 19:20:02.137720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.594 qpair failed and we were unable to recover it. 
00:27:59.594 [2024-07-12 19:20:02.137922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.594 [2024-07-12 19:20:02.137952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.594 qpair failed and we were unable to recover it. 00:27:59.594 [2024-07-12 19:20:02.138241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.594 [2024-07-12 19:20:02.138272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.594 qpair failed and we were unable to recover it. 00:27:59.594 [2024-07-12 19:20:02.138568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.594 [2024-07-12 19:20:02.138598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.594 qpair failed and we were unable to recover it. 00:27:59.871 [2024-07-12 19:20:02.138894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.871 [2024-07-12 19:20:02.138927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.871 qpair failed and we were unable to recover it. 00:27:59.871 [2024-07-12 19:20:02.139081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.871 [2024-07-12 19:20:02.139110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.871 qpair failed and we were unable to recover it. 00:27:59.871 [2024-07-12 19:20:02.139381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.871 [2024-07-12 19:20:02.139413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.871 qpair failed and we were unable to recover it. 00:27:59.871 [2024-07-12 19:20:02.139710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.871 [2024-07-12 19:20:02.139739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.871 qpair failed and we were unable to recover it. 00:27:59.871 [2024-07-12 19:20:02.139943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.871 [2024-07-12 19:20:02.139972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.871 qpair failed and we were unable to recover it. 00:27:59.871 [2024-07-12 19:20:02.140253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.871 [2024-07-12 19:20:02.140284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.871 qpair failed and we were unable to recover it. 00:27:59.871 [2024-07-12 19:20:02.140586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.871 [2024-07-12 19:20:02.140616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.871 qpair failed and we were unable to recover it. 
00:27:59.871 [2024-07-12 19:20:02.140886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.871 [2024-07-12 19:20:02.140915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:27:59.871 qpair failed and we were unable to recover it.
00:27:59.876 [... the same three-message triplet -- posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." -- repeats verbatim for every reconnect attempt from 19:20:02.140886 through 19:20:02.195799; only the microsecond timestamps differ between attempts. ...]
00:27:59.877 [2024-07-12 19:20:02.195999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.196028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.196282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.196313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.196576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.196608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.196728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.196759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.197024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.197053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.197178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.197208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.197493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.197526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.197801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.197831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.198061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.198091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.198361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.198393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 
00:27:59.877 [2024-07-12 19:20:02.198663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.198695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.198920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.198949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.199090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.199119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.199336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.199367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.199586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.199615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.199753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.199783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.199976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.200006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.200208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.200247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.200395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.200424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.200607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.200637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 
00:27:59.877 [2024-07-12 19:20:02.200832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.200862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.201112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.201142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.201330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.201362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.201615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.201648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.201875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.201905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.202026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.202055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.202248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.202280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.202411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.202442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.202657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.202692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.202966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.202996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 
00:27:59.877 [2024-07-12 19:20:02.203192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.203239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.203442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.203471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.203675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.203705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.203918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.203948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.204221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.204264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.204518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.204548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.204823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.204853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.205058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.877 [2024-07-12 19:20:02.205088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.877 qpair failed and we were unable to recover it. 00:27:59.877 [2024-07-12 19:20:02.205284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.205318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.205517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.205547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 
00:27:59.878 [2024-07-12 19:20:02.205725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.205755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.206008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.206038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.206262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.206293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.206484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.206514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.206675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.206706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.206931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.206960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.207166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.207196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.207438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.207469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.207647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.207677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.207876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.207906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 
00:27:59.878 [2024-07-12 19:20:02.208041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.208071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.208318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.208349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.208545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.208575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.208702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.208732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.208854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.208884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.209083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.209113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.209248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.209280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.209556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.209587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.209916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.209946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.210160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.210191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 
00:27:59.878 [2024-07-12 19:20:02.210437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.210471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.210612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.210644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.210918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.210948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.211222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.211266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.211487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.211517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.211713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.211743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.211932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.211962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.212088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.212119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.212250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.212287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.212434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.212464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 
00:27:59.878 [2024-07-12 19:20:02.212598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.212628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.212824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.212854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.213047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.213076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.213189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.213219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.213347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.213376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.213494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.213524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.213773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.213803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.213980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.214010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.878 [2024-07-12 19:20:02.214187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.878 [2024-07-12 19:20:02.214215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.878 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.214543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.214574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 
00:27:59.879 [2024-07-12 19:20:02.214724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.214754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.214946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.214976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.215120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.215151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.215348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.215381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.215562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.215591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.215768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.215798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.215923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.215952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.216072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.216101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.216206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.216252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.216375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.216405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 
00:27:59.879 [2024-07-12 19:20:02.216530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.216559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.216682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.216712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.216843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.216873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.216991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.217021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.217139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.217169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.217312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.217343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.217552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.217583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.217692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.217722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.217999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.218028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.218149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.218179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 
00:27:59.879 [2024-07-12 19:20:02.218312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.218343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.218592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.218622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.218757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.218787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.218907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.218937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.219116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.219146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.219272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.219305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.219488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.219519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.219746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.219776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.219957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.219993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.220110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.220140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 
00:27:59.879 [2024-07-12 19:20:02.220346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.220377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.220556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.220587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.220767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.220797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.220994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.221023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.221213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.221256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.221441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.221471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.221718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.221747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.221872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.221903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.222078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.879 [2024-07-12 19:20:02.222107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.879 qpair failed and we were unable to recover it. 00:27:59.879 [2024-07-12 19:20:02.222295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.880 [2024-07-12 19:20:02.222326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.880 qpair failed and we were unable to recover it. 
00:27:59.880 [2024-07-12 19:20:02.222434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.880 [2024-07-12 19:20:02.222463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.880 qpair failed and we were unable to recover it. 00:27:59.880 [2024-07-12 19:20:02.222662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.880 [2024-07-12 19:20:02.222691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.880 qpair failed and we were unable to recover it. 00:27:59.880 [2024-07-12 19:20:02.222815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.880 [2024-07-12 19:20:02.222846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.880 qpair failed and we were unable to recover it. 00:27:59.880 [2024-07-12 19:20:02.223036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.880 [2024-07-12 19:20:02.223066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.880 qpair failed and we were unable to recover it. 00:27:59.880 [2024-07-12 19:20:02.223200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.880 [2024-07-12 19:20:02.223241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.880 qpair failed and we were unable to recover it. 00:27:59.880 [2024-07-12 19:20:02.223355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.880 [2024-07-12 19:20:02.223385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.880 qpair failed and we were unable to recover it. 00:27:59.880 [2024-07-12 19:20:02.223559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.880 [2024-07-12 19:20:02.223589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.880 qpair failed and we were unable to recover it. 00:27:59.880 [2024-07-12 19:20:02.223698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.880 [2024-07-12 19:20:02.223728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.880 qpair failed and we were unable to recover it. 00:27:59.880 [2024-07-12 19:20:02.223993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.880 [2024-07-12 19:20:02.224022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.880 qpair failed and we were unable to recover it. 00:27:59.880 [2024-07-12 19:20:02.224125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.880 [2024-07-12 19:20:02.224155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.880 qpair failed and we were unable to recover it. 
00:27:59.880 [2024-07-12 19:20:02.224331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.880 [2024-07-12 19:20:02.224363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:27:59.880 qpair failed and we were unable to recover it.
[... the identical triplet — connect() failed, errno = 111 / sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously in this capture, with only the sub-second timestamps advancing from 19:20:02.224331 to 19:20:02.277528 ...]
00:27:59.885 [2024-07-12 19:20:02.277498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.885 [2024-07-12 19:20:02.277528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:27:59.885 qpair failed and we were unable to recover it.
00:27:59.885 [2024-07-12 19:20:02.277737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.885 [2024-07-12 19:20:02.277767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.885 qpair failed and we were unable to recover it. 00:27:59.885 [2024-07-12 19:20:02.278015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.885 [2024-07-12 19:20:02.278045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.885 qpair failed and we were unable to recover it. 00:27:59.885 [2024-07-12 19:20:02.278246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.885 [2024-07-12 19:20:02.278277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.885 qpair failed and we were unable to recover it. 00:27:59.885 [2024-07-12 19:20:02.278529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.885 [2024-07-12 19:20:02.278559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.885 qpair failed and we were unable to recover it. 00:27:59.885 [2024-07-12 19:20:02.278744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.885 [2024-07-12 19:20:02.278774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.885 qpair failed and we were unable to recover it. 00:27:59.885 [2024-07-12 19:20:02.278980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.885 [2024-07-12 19:20:02.279009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.885 qpair failed and we were unable to recover it. 00:27:59.885 [2024-07-12 19:20:02.279248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.885 [2024-07-12 19:20:02.279282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.885 qpair failed and we were unable to recover it. 00:27:59.885 [2024-07-12 19:20:02.279559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.885 [2024-07-12 19:20:02.279592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.885 qpair failed and we were unable to recover it. 00:27:59.885 [2024-07-12 19:20:02.279727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.885 [2024-07-12 19:20:02.279762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.885 qpair failed and we were unable to recover it. 00:27:59.885 [2024-07-12 19:20:02.280035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.885 [2024-07-12 19:20:02.280069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.885 qpair failed and we were unable to recover it. 
00:27:59.885 [2024-07-12 19:20:02.280281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.885 [2024-07-12 19:20:02.280315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.280530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.280559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.280810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.280839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.281015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.281044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.281255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.281290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.281486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.281516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.281758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.281789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.281911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.281941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.282079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.282109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.282380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.282411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 
00:27:59.886 [2024-07-12 19:20:02.282619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.282649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.282830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.282860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.283123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.283153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.283376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.283407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.283627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.283656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.283832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.283862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.284136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.284166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.284392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.284423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.284553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.284582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.284794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.284824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 
00:27:59.886 [2024-07-12 19:20:02.285111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.285140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.285268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.285300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.285503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.285534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.285680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.285710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.285901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.285931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.286121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.286151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.286331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.286361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.286559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.286588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.286785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.286814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.286954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.286983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 
00:27:59.886 [2024-07-12 19:20:02.287192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.287221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.287513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.287543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.287676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.287705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.287906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.287935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.288237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.288269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.288463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.288493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.288629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.288658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.288858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.288888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.289013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.289048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.289251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.289283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 
00:27:59.886 [2024-07-12 19:20:02.289469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.886 [2024-07-12 19:20:02.289499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.886 qpair failed and we were unable to recover it. 00:27:59.886 [2024-07-12 19:20:02.289691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.289721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.289941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.289970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.290244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.290276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.290419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.290449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.290632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.290662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.290946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.290976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.291261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.291291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.291504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.291534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.291715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.291744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 
00:27:59.887 [2024-07-12 19:20:02.291948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.291977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.292236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.292267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.292456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.292487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.292681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.292710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.292999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.293029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.293156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.293185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.293480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.293510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.293712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.293741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.293919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.293948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.294206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.294245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 
00:27:59.887 [2024-07-12 19:20:02.294524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.294554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.294751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.294780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.295040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.295070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.295257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.295289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.295403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.295432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.295564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.295598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.295886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.295915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.296188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.296218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.296433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.296463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.296737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.296765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 
00:27:59.887 [2024-07-12 19:20:02.296947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.296977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.297178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.297208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.297423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.297453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.297663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.297693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.297906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.297936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.298191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.298222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.298362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.298392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.298581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.298610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.298871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.298900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.299154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.299184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 
00:27:59.887 [2024-07-12 19:20:02.299520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.299552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.887 [2024-07-12 19:20:02.299748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.887 [2024-07-12 19:20:02.299778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.887 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.299992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.300021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.300203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.300243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.300429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.300458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.300665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.300694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.300943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.300973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.301164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.301193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.301409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.301440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.301708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.301737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 
00:27:59.888 [2024-07-12 19:20:02.301992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.302021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.302213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.302254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.302530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.302562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.302749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.302778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.303059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.303088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.303356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.303387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.303663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.303692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.303915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.303945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.304200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.304243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.304437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.304467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 
00:27:59.888 [2024-07-12 19:20:02.304647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.304677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.304872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.304901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.305171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.305201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.305506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.305537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.305806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.305835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.306017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.306052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.306324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.306355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.306635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.306664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.306958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.306988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.307186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.307215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 
00:27:59.888 [2024-07-12 19:20:02.307453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.307483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.307751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.307780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.307980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.308010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.308276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.308307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.308582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.308612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.308834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.308863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.888 [2024-07-12 19:20:02.309051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.888 [2024-07-12 19:20:02.309080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.888 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.309382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.309413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.309628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.309657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.309794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.309824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 
00:27:59.889 [2024-07-12 19:20:02.310039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.310070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.310265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.310296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.310446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.310476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.310753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.310782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.310924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.310953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.311236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.311267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.311431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.311461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.311709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.311739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.311884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.311914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.312112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.312141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 
00:27:59.889 [2024-07-12 19:20:02.312409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.312440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.312575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.312604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.312799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.312829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.313017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.313046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.313314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.313345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.313534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.313564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.313792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.313821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.314122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.314152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.314360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.314390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.314674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.314704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 
00:27:59.889 [2024-07-12 19:20:02.314961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.314991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.315245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.315276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.315498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.315528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.315780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.315810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.315987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.316017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.316239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.316275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.316406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.316435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.316618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.316647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.316789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.316818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.317020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.317049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 
00:27:59.889 [2024-07-12 19:20:02.317313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.317344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.317644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.317674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.317943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.317972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.318249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.318280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.318479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.318508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.318806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.318836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-07-12 19:20:02.319036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.889 [2024-07-12 19:20:02.319065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.319264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.319295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.319483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.319513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.319713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.319742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 
00:27:59.890 [2024-07-12 19:20:02.319927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.319957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.320219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.320259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.320507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.320537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.320786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.320815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.320995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.321025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.321223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.321281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.321461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.321491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.321766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.321796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.321992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.322021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.322298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.322329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 
00:27:59.890 [2024-07-12 19:20:02.322544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.322573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.322701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.322730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.323014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.323044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.323353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.323385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.323646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.323676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.323788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.323817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.324071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.324100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.324374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.324405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.324608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.324637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.324768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.324797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 
00:27:59.890 [2024-07-12 19:20:02.324978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.325007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.325210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.325251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.325360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.325389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.325657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.325686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.325936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.325965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.326247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.326287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.326482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.326512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.326759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.326788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.326979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.327008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.327204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.327245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 
00:27:59.890 [2024-07-12 19:20:02.327364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.327393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.327614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.327644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.327836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.327865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.328051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.328080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.328331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.328362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.328569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.328599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.328847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.328876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-07-12 19:20:02.329157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.890 [2024-07-12 19:20:02.329186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.329481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.329511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.329632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.329662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 
00:27:59.891 [2024-07-12 19:20:02.329914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.329943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.330254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.330286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.330566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.330595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.330846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.330875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.331132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.331163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.331344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.331374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.331555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.331584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.331869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.331898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.332168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.332197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.332440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.332470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 
00:27:59.891 [2024-07-12 19:20:02.332763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.332793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.332969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.332999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.333263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.333294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.333440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.333469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.333658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.333688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.333889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.333918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.334182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.334211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.334409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.334439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.334715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.334746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.334946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.334975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 
00:27:59.891 [2024-07-12 19:20:02.335237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.335267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.335544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.335574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.335718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.335747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.335936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.335966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.336183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.336212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.336346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.336382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.336578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.336608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.336724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.336753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.336971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.337000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.337200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.337256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 
00:27:59.891 [2024-07-12 19:20:02.337453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.337483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.337667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.337696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.337894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.337923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.338125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.338155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.338344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.338375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.338581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.338611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.338790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.338820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-07-12 19:20:02.339063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.891 [2024-07-12 19:20:02.339091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.339360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.339391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.339583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.339613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 
00:27:59.892 [2024-07-12 19:20:02.339740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.339768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.340044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.340073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.340294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.340325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.340610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.340639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.340845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.340874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.341074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.341103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.341373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.341404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.341596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.341625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.341776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.341806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.342054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.342084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 
00:27:59.892 [2024-07-12 19:20:02.342279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.342310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.342490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.342519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.342821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.342850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.343028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.343058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.343256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.343287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.343489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.343518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.343773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.343802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.344094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.344124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.344457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.344487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.344706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.344735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 
00:27:59.892 [2024-07-12 19:20:02.344927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.344957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.345131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.345160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.345385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.345416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.345590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.345619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.345893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.345922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.346173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.346208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.346422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.346452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.346631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.346660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.346813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.346842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.347035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.347064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 
00:27:59.892 [2024-07-12 19:20:02.347268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.347298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.347610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.347640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.347767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.347797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.348048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.348077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.348327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.348358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.348607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.348637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.892 qpair failed and we were unable to recover it. 00:27:59.892 [2024-07-12 19:20:02.348771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.892 [2024-07-12 19:20:02.348800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.348987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.349016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.349266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.349298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.349602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.349634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 
00:27:59.893 [2024-07-12 19:20:02.349904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.349934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.350242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.350273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.350526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.350556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.350816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.350846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.351041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.351071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.351250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.351281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.351559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.351588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.351796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.351825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.352065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.352095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.352240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.352271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 
00:27:59.893 [2024-07-12 19:20:02.352451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.352480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.352752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.352782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.353036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.353065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.353198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.353239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.353490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.353520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.353707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.353736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.354008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.354037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.354253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.354284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.354552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.354581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 00:27:59.893 [2024-07-12 19:20:02.354835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.893 [2024-07-12 19:20:02.354865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:27:59.893 qpair failed and we were unable to recover it. 
00:27:59.897 [2024-07-12 19:20:02.393216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa000 is same with the state(5) to be set
00:27:59.897 [2024-07-12 19:20:02.393492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.897 [2024-07-12 19:20:02.393569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:27:59.897 qpair failed and we were unable to recover it.
00:27:59.897 [the same triplet for tqpair=0x7fd87c000b90 repeats from 19:20:02.393800 through 19:20:02.400199]
00:27:59.898 [2024-07-12 19:20:02.400478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:59.898 [2024-07-12 19:20:02.400526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:27:59.898 qpair failed and we were unable to recover it.
00:27:59.898 [the same triplet for tqpair=0xcebed0 repeats from 19:20:02.400742 through 19:20:02.406294]
00:27:59.898 [2024-07-12 19:20:02.401064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.401094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.401240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.401272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.401477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.401507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.401784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.401813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.402075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.402104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.402370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.402400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.402700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.402729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.402960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.402991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.403253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.403284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.403533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.403563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 
00:27:59.898 [2024-07-12 19:20:02.403782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.403811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.404081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.404110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.404347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.404377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.404581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.404613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.404893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.404924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.405152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.405182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.405444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.405474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.405733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.405763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.405963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.405993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.406263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.406294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 
00:27:59.898 [2024-07-12 19:20:02.406563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.406592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.406781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.406812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.407092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.407121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.407402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.407433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.407651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.407682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.407830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.407860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.408126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.408156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.408301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.408332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.408463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.408494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.408755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.408785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 
00:27:59.898 [2024-07-12 19:20:02.409047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.409077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.409372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.898 [2024-07-12 19:20:02.409404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.898 qpair failed and we were unable to recover it. 00:27:59.898 [2024-07-12 19:20:02.409616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.409646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.409849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.409878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.410126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.410156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.410429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.410461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.410737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.410767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.410980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.411009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.411290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.411322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.411504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.411533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 
00:27:59.899 [2024-07-12 19:20:02.411912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.411988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.412294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.412330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.412608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.412640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.412946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.412977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.413264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.413297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.413503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.413534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.413713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.413743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.413951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.413981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.414262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.414293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.414535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.414565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 
00:27:59.899 [2024-07-12 19:20:02.414777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.414807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.415021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.415051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.415328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.415360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.415565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.415606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.415822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.415853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.416124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.416156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.416457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.416488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.416636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.416667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.416817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.416846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.417101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.417131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 
00:27:59.899 [2024-07-12 19:20:02.417255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.417286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.417569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.417598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.417801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.417831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.418072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.418102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.418246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.418277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.418478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.418508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.418711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.418740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.419037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.419068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.419253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.419284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.419480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.419510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 
00:27:59.899 [2024-07-12 19:20:02.419721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.419751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.419930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.419959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.899 qpair failed and we were unable to recover it. 00:27:59.899 [2024-07-12 19:20:02.420207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-07-12 19:20:02.420259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:27:59.900 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.420581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.420613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.420888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.420919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.421118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.421148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.421351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.421382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.421653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.421683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.421938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.421968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.422267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.422298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 
00:28:00.177 [2024-07-12 19:20:02.422556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.422588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.422802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.422832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.423053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.423083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.423381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.423412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.423530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.423560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.423857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.423889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.424162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.424191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.424445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.424476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.424787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.424817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.425084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.425113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 
00:28:00.177 [2024-07-12 19:20:02.425314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.425345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.425545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.425575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.425756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.425785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.177 qpair failed and we were unable to recover it. 00:28:00.177 [2024-07-12 19:20:02.426031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.177 [2024-07-12 19:20:02.426067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.426257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.426288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.426568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.426598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.426871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.426901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.427187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.427216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.427506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.427536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.427816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.427846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 
00:28:00.178 [2024-07-12 19:20:02.428122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.428151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.428394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.428426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.428702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.428731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.428995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.429025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.429201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.429254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.429447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.429477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.429673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.429704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.429992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.430023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.430298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.430329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.430532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.430562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 
00:28:00.178 [2024-07-12 19:20:02.430814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.430845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.431092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.431121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.431301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.431332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.431604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.431635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.431758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.431789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.431990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.432020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.432271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.432302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.432558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.432588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.432839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.432869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.433146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.433176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 
00:28:00.178 [2024-07-12 19:20:02.433477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.433514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.433777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.433807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.434032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.434062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.434336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.434367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.434662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.434692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.434909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.434939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.435188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.435218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.435419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.435450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.435588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.435618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.435891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.435922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 
00:28:00.178 [2024-07-12 19:20:02.436215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.178 [2024-07-12 19:20:02.436256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.178 qpair failed and we were unable to recover it. 00:28:00.178 [2024-07-12 19:20:02.436526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.436556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.436753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.436783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.437044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.437074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.437278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.437310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.437519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.437549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.437821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.437851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.438048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.438078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.438341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.438372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.438640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.438670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 
00:28:00.179 [2024-07-12 19:20:02.438926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.438956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.439254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.439285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.439556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.439587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.439884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.439915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.440194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.440232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.440508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.440539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.440828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.440859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.441146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.441176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.441460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.441491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.441677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.441707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 
00:28:00.179 [2024-07-12 19:20:02.441980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.442010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.442188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.442218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.442503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.442534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.442758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.442788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.443036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.443067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.443335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.443366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.443661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.443690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.443898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.443927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.444119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.444148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 00:28:00.179 [2024-07-12 19:20:02.444371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.179 [2024-07-12 19:20:02.444401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.179 qpair failed and we were unable to recover it. 
00:28:00.184 [2024-07-12 19:20:02.497800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.184 [2024-07-12 19:20:02.497831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.184 qpair failed and we were unable to recover it. 00:28:00.184 [2024-07-12 19:20:02.498126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.184 [2024-07-12 19:20:02.498155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.184 qpair failed and we were unable to recover it. 00:28:00.184 [2024-07-12 19:20:02.498347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.184 [2024-07-12 19:20:02.498379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.184 qpair failed and we were unable to recover it. 00:28:00.184 [2024-07-12 19:20:02.498629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.184 [2024-07-12 19:20:02.498659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.184 qpair failed and we were unable to recover it. 00:28:00.184 [2024-07-12 19:20:02.498903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.184 [2024-07-12 19:20:02.498933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.184 qpair failed and we were unable to recover it. 00:28:00.184 [2024-07-12 19:20:02.499114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.184 [2024-07-12 19:20:02.499144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.184 qpair failed and we were unable to recover it. 00:28:00.184 [2024-07-12 19:20:02.499445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.184 [2024-07-12 19:20:02.499475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.184 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.499740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.499770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.500060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.500091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.500279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.500309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-07-12 19:20:02.500515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.500551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.500760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.500790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.501041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.501071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.501371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.501402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.501612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.501642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.501843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.501873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.502145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.502175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.502404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.502437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.502688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.502717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.502997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.503027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-07-12 19:20:02.503256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.503287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.503539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.503569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.503869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.503898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.504105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.504134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.504320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.504351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.504547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.504578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.504829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.504859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.505038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.505068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.505315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.505346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.505599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.505628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-07-12 19:20:02.505845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.505875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.506167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.506196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.506520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.506552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.506831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.506862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.507127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.507156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.507456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.507488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.507767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.507797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.508013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.508044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.508169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.508199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 00:28:00.185 [2024-07-12 19:20:02.508405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.185 [2024-07-12 19:20:02.508436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-07-12 19:20:02.508627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.508657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.508784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.508813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.508995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.509025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.509292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.509323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.509506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.509536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.509754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.509784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.509959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.509989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.510113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.510143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.510326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.510357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.510632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.510662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 
00:28:00.186 [2024-07-12 19:20:02.510939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.510974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.511262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.511294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.511511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.511541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.511729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.511758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.511948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.511977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.512245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.512275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.512576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.512605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.512824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.512854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.513124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.513153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.513431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.513462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 
00:28:00.186 [2024-07-12 19:20:02.513584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.513613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.513817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.513847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.514125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.514155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.514423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.514454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.514649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.514679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.514862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.514891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.515002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.515032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.515246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.515278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.515470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.515499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.515746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.515776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 
00:28:00.186 [2024-07-12 19:20:02.515958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.515989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.186 [2024-07-12 19:20:02.516173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.186 [2024-07-12 19:20:02.516203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.186 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.516490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.516521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.516780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.516809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.516996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.517026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.517280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.517311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.517568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.517598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.517852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.517883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.518084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.518114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.518362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.518394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 
00:28:00.187 [2024-07-12 19:20:02.518584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.518613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.518882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.518911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.519209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.519250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.519463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.519494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.519691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.519721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.519929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.519960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.520140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.520170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.520462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.520494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.520796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.520826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.521105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.521136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 
00:28:00.187 [2024-07-12 19:20:02.521389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.521426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.521725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.521755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.522048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.522078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.522277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.522308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.522591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.522621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.522906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.522936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.523135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.523165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.523423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.523454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.523705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.523735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.523916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.523946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 
00:28:00.187 [2024-07-12 19:20:02.524174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.524203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.524422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.524453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.524642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.524672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.524944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.524975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.525252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.187 [2024-07-12 19:20:02.525283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.187 qpair failed and we were unable to recover it. 00:28:00.187 [2024-07-12 19:20:02.525555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.525585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.525900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.525931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.526133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.526162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.526355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.526386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.526585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.526615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 
00:28:00.188 [2024-07-12 19:20:02.526894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.526925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.527172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.527203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.527395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.527426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.527676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.527706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.527903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.527934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.528205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.528245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.528375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.528405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.528678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.528753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.528976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.529010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.529215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.529259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 
00:28:00.188 [2024-07-12 19:20:02.529538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.529569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.529842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.529873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.530080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.530110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.530359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.530390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.530594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.530624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.530872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.530901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.531156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.531185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.531384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.531415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.531610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.531639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.531887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.531916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 
00:28:00.188 [2024-07-12 19:20:02.532175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.532214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.532429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.532459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.532585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.532614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.532762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.532792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.533065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.533096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.533289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.533320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.533438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.533469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.533739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.533769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.534041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.534072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 00:28:00.188 [2024-07-12 19:20:02.534210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.188 [2024-07-12 19:20:02.534249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.188 qpair failed and we were unable to recover it. 
00:28:00.188 [2024-07-12 19:20:02.534544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.189 [2024-07-12 19:20:02.534574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.189 qpair failed and we were unable to recover it.
00:28:00.189 [2024-07-12 19:20:02.534770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.189 [2024-07-12 19:20:02.534800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.189 qpair failed and we were unable to recover it.
00:28:00.189 [... the same three-line error sequence repeats for every reconnect attempt from 19:20:02.534544 through 19:20:02.590346, the timestamps differing only by microseconds: connect() to addr=10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fd86c000b90, and the qpair cannot be recovered ...]
00:28:00.194 [2024-07-12 19:20:02.590315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.194 [2024-07-12 19:20:02.590346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.194 qpair failed and we were unable to recover it.
00:28:00.194 [2024-07-12 19:20:02.590645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.590674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.590816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.590846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.591038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.591068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.591191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.591221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.591484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.591514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.591805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.591834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.592012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.592042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.592316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.592352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.592630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.592660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.592857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.592887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 
00:28:00.194 [2024-07-12 19:20:02.593111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.593141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.593324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.593355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.593549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.593579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.593849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.593878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.594074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.594104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.594370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.594401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.594601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.594631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.594895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.594926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.595173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.194 [2024-07-12 19:20:02.595202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.194 qpair failed and we were unable to recover it. 00:28:00.194 [2024-07-12 19:20:02.595427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.595458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 
00:28:00.195 [2024-07-12 19:20:02.595673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.595703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.595885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.595915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.596112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.596142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.596372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.596403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.596697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.596727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.596975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.597005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.597193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.597222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.597413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.597443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.597657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.597687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.597944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.597973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 
00:28:00.195 [2024-07-12 19:20:02.598244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.598276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.598475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.598505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.598769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.598798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.599095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.599124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.599404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.599436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.599710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.599739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.599921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.599951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.600131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.600159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.600384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.600416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.600616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.600645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 
00:28:00.195 [2024-07-12 19:20:02.600920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.600949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.601247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.601278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.601473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.601503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.601798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.601828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.602079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.602108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.602381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.602412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.602619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.602649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.602901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.602936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.603116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.603145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.603395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.603425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 
00:28:00.195 [2024-07-12 19:20:02.603623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.603653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.603852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.603881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.604133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.604164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.604383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.195 [2024-07-12 19:20:02.604413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.195 qpair failed and we were unable to recover it. 00:28:00.195 [2024-07-12 19:20:02.604660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.604690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.604878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.604908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.605115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.605144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.605337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.605368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.605566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.605595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.605876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.605906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 
00:28:00.196 [2024-07-12 19:20:02.606199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.606238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.606513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.606542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.606790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.606820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.607027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.607057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.607307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.607338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.607610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.607641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.607854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.607884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.608182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.608212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.608508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.608538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.608816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.608845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 
00:28:00.196 [2024-07-12 19:20:02.609037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.609067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.609334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.609365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.609664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.609694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.609957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.609987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.610295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.610328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.610591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.610621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.610911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.610941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.611238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.611270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.611484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.611513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.611753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.611782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 
00:28:00.196 [2024-07-12 19:20:02.611999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.612029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.612302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.612333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.612625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.612655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.612863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.612892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.613140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.613170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.613454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.613485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.613688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.613718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.613925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.613960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.614136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.614165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.614432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.614463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 
00:28:00.196 [2024-07-12 19:20:02.614712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.614741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.615014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.615043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.615290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.615321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.615512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.615541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.196 qpair failed and we were unable to recover it. 00:28:00.196 [2024-07-12 19:20:02.615818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.196 [2024-07-12 19:20:02.615848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.616115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.616146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.616408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.616440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.616642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.616671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.616835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.616866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.617174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.617205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 
00:28:00.197 [2024-07-12 19:20:02.617480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.617511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.617793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.617823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.618113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.618143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.618447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.618478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.618746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.618776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.619029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.619059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.619309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.619340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.619562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.619592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.619813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.619842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.620091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.620121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 
00:28:00.197 [2024-07-12 19:20:02.620313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.620344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.620622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.620652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.620938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.620969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.621117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.621147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.621432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.621464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.621641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.621670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.621944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.621974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.622154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.622184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.622488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.622520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.622786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.622816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 
00:28:00.197 [2024-07-12 19:20:02.623121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.623150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.623420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.623451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.623703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.623732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.623982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.624011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.624268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.624299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.624482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.624512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.624785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.624814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.624963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.624999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.625297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.625328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 00:28:00.197 [2024-07-12 19:20:02.625601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.625631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it. 
00:28:00.197 [2024-07-12 19:20:02.625924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.197 [2024-07-12 19:20:02.625954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.197 qpair failed and we were unable to recover it.
00:28:00.197 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats roughly 200 more times between 19:20:02.626 and 19:20:02.681 (console timestamps 00:28:00.197 through 00:28:00.203) ...]
00:28:00.203 [2024-07-12 19:20:02.681047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.681076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.681362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.681393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.681687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.681717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.681902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.681932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.682125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.682155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.682401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.682433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.682679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.682708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.682965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.682994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.683270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.683302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.683598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.683627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 
00:28:00.203 [2024-07-12 19:20:02.683850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.683879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.684128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.684158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.684361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.684392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.684642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.684672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.684939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.684970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.685148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.685178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.685468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.685499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.685791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.685820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.686097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.686127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.686335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.686367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 
00:28:00.203 [2024-07-12 19:20:02.686560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.686589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.686742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.686772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.687049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.687078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.687334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.203 [2024-07-12 19:20:02.687365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.203 qpair failed and we were unable to recover it. 00:28:00.203 [2024-07-12 19:20:02.687513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.687543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.687770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.687800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.688083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.688113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.688392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.688428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.688705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.688737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.689028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.689058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 
00:28:00.204 [2024-07-12 19:20:02.689257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.689288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.689506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.689537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.689718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.689748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.690025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.690056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.690254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.690285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.690467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.690496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.690714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.690745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.690948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.690978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.691242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.691273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.691525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.691555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 
00:28:00.204 [2024-07-12 19:20:02.691701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.691730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.691938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.691969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.692161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.692190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.692452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.692484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.692730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.692760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.693077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.693106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.693380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.693411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.693662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.693692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.693870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.693900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.694149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.694179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 
00:28:00.204 [2024-07-12 19:20:02.694436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.694468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.694619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.694649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.694849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.694878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.695099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.695129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.695315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.695347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.695614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.695643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.695835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.695864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.204 [2024-07-12 19:20:02.696142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.204 [2024-07-12 19:20:02.696172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.204 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.696458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.696489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.696752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.696781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 
00:28:00.205 [2024-07-12 19:20:02.697037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.697067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.697266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.697297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.697570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.697600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.697898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.697927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.698204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.698244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.698397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.698427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.698608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.698638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.698921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.698955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.699248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.699280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.699554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.699585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 
00:28:00.205 [2024-07-12 19:20:02.699871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.699901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.700142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.700172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.700493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.700523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.700784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.700813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.700996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.701027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.701216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.701259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.701460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.701491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.701738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.701768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.701968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.701997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.702132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.702162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 
00:28:00.205 [2024-07-12 19:20:02.702363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.702395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.702718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.702749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.703012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.703042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.703325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.703356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.703476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.703506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.703795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.703824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.704076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.704106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.704313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.704358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.704624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.704654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.704795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.704825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 
00:28:00.205 [2024-07-12 19:20:02.705087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.705117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.705318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.705349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.205 qpair failed and we were unable to recover it. 00:28:00.205 [2024-07-12 19:20:02.705487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.205 [2024-07-12 19:20:02.705516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.705781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.705810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.706076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.706106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.706382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.706413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.706693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.706724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.706840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.706869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.707047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.707077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.707234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.707265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 
00:28:00.206 [2024-07-12 19:20:02.707447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.707477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.707729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.707759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.707866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.707895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.708245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.708276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.708502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.708531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.708678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.708709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.708981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.709011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.709214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.709254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.709530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.709560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.709704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.709734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 
00:28:00.206 [2024-07-12 19:20:02.709935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.709965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.710153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.710184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.710373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.710404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.710526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.710556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.710733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.710763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.710981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.711010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.711216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.711273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.711379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.711409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.711518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.711548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.711762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.711791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 
00:28:00.206 [2024-07-12 19:20:02.711972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.712001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.712266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.712298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.712488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.712519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.712726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.712756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.206 [2024-07-12 19:20:02.712951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.206 [2024-07-12 19:20:02.712981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.206 qpair failed and we were unable to recover it. 00:28:00.207 [2024-07-12 19:20:02.713090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.207 [2024-07-12 19:20:02.713120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.207 qpair failed and we were unable to recover it. 00:28:00.207 [2024-07-12 19:20:02.713302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.207 [2024-07-12 19:20:02.713333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.207 qpair failed and we were unable to recover it. 00:28:00.207 [2024-07-12 19:20:02.713478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.207 [2024-07-12 19:20:02.713507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.207 qpair failed and we were unable to recover it. 00:28:00.207 [2024-07-12 19:20:02.713640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.207 [2024-07-12 19:20:02.713670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.207 qpair failed and we were unable to recover it. 00:28:00.207 [2024-07-12 19:20:02.713877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.207 [2024-07-12 19:20:02.713908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.207 qpair failed and we were unable to recover it. 
00:28:00.207 [2024-07-12 19:20:02.714158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.207 [2024-07-12 19:20:02.714187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.207 qpair failed and we were unable to recover it.
[... the same three-message pattern -- posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." -- repeats continuously with only the timestamps advancing, from 19:20:02.714 through 19:20:02.761 (roughly 200 occurrences), every attempt failing with errno = 111 against the same tqpair and target ...]
00:28:00.489 [2024-07-12 19:20:02.761477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.489 [2024-07-12 19:20:02.761506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.489 qpair failed and we were unable to recover it.
00:28:00.489 [2024-07-12 19:20:02.761641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.761671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.761864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.761893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.762016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.762045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.762164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.762192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.762376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.762405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.762578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.762606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.762795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.762823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.762993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.763020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.763130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.763158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.763337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.763367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 
00:28:00.489 [2024-07-12 19:20:02.763539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.763568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.763823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.763853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.764036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.764066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.764247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.764277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.764394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.764423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.764667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.764697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.764975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.765005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.765133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.765162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.765303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.765332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.765558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.765586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 
00:28:00.489 [2024-07-12 19:20:02.765714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.765742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.765867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.765895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.766072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.766101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.766318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.766347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.766472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.766500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.766706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.766735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.766858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.766886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.767073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.767101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.767215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.767257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.767500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.767527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 
00:28:00.489 [2024-07-12 19:20:02.767780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.767810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.767950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.767978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.768117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.768144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.768436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.768468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.768682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.768717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.768836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.768864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.769049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.769078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.769327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.769357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.769543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.769573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.769750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.769780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 
00:28:00.489 [2024-07-12 19:20:02.769905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.769933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.770066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.770095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.770199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.770238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.770361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.770389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.770510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.770538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.770788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.770817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.770922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.770950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.771149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.771178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.771299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.771328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.771500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.771529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 
00:28:00.489 [2024-07-12 19:20:02.771766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.771795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.771985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.772013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.772204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.772253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.772521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.772550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.772730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.772760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.773010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.489 [2024-07-12 19:20:02.773040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.489 qpair failed and we were unable to recover it. 00:28:00.489 [2024-07-12 19:20:02.773163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.773192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.773320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.773351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.773525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.773555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.773735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.773765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 
00:28:00.490 [2024-07-12 19:20:02.774004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.774033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.774209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.774252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.774498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.774527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.774699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.774728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.774895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.774924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.775098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.775126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.775269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.775300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.775483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.775512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.775701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.775730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.775903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.775933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 
00:28:00.490 [2024-07-12 19:20:02.776170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.776199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.776325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.776355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.776596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.776626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.776818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.776846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.777039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.777074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.777189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.777218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.777343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.777373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.777561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.777590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.777806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.777835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.778083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.778112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 
00:28:00.490 [2024-07-12 19:20:02.778255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.778285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.778395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.778424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.778537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.778567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.778685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.778714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.778836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.778865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.778974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.779003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.779099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.779129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.779302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.779332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.779544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.779574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.779681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.779710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 
00:28:00.490 [2024-07-12 19:20:02.779967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.779997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.780174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.780204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.780416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.780447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.780637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.780666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.780784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.780813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.781088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.781118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.781245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.781276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.781464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.781494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.781682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.781711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.781829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.781858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 
00:28:00.490 [2024-07-12 19:20:02.781977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.782008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.782249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.782321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.782542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.782576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.782865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.782895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.783025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.783057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.783247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.783279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.783416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.783446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.783683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.783712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.783838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.783868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.784087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.784117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 
00:28:00.490 [2024-07-12 19:20:02.784369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.784400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.784686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.784716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.784927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.784959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.785083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.785113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.785258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.785298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.785496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.785527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.785730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.785760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.785959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.785989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.786233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.786264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.786447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.786477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 
00:28:00.490 [2024-07-12 19:20:02.786738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.786767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.786877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.786907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.787096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.787125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.787294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.787325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.787529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.787570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.787752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.787781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.787977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.788007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.788195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.788237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.788357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.788405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.490 qpair failed and we were unable to recover it. 00:28:00.490 [2024-07-12 19:20:02.788596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.490 [2024-07-12 19:20:02.788626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.491 qpair failed and we were unable to recover it. 
00:28:00.491 [2024-07-12 19:20:02.788861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.491 [2024-07-12 19:20:02.788891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.491 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 / sock connection error entries for tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 repeat from 2024-07-12 19:20:02.789073 through 19:20:02.799304, each ending "qpair failed and we were unable to recover it." ...]
00:28:00.491 [2024-07-12 19:20:02.799455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.491 [2024-07-12 19:20:02.799524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.491 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for tqpair=0xcebed0 from 2024-07-12 19:20:02.799777 through 19:20:02.813195 ...]
00:28:00.492 [2024-07-12 19:20:02.813431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.492 [2024-07-12 19:20:02.813500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.492 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for tqpair=0x7fd86c000b90 from 2024-07-12 19:20:02.813637 through 19:20:02.830333 ...]
00:28:00.493 [2024-07-12 19:20:02.828657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.828687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.828819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.828849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.828967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.828997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.829190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.829219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.829428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.829459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.829724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.829754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.829863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.829893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.830025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.830055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.830301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.830333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.830440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.830469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 
00:28:00.493 [2024-07-12 19:20:02.830600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.830629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.830800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.830830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.830962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.830991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.831163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.831193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.831386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.831416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.831661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.831690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.831865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.831895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.832025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.832054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.832277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.832307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.832422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.832451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 
00:28:00.493 [2024-07-12 19:20:02.832574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.832603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.832734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.832763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.493 qpair failed and we were unable to recover it. 00:28:00.493 [2024-07-12 19:20:02.832932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.493 [2024-07-12 19:20:02.832961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.833128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.833157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.833341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.833372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.833566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.833595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.833710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.833741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.833975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.834005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.834123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.834159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.834273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.834304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 
00:28:00.494 [2024-07-12 19:20:02.834506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.834536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.834705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.834734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.834927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.834956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.835128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.835158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.835261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.835291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.835408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.835437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.835552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.835581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.835698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.835728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.835838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.835867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.836102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.836136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 
00:28:00.494 [2024-07-12 19:20:02.836376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.836406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.836519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.836549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.836658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.836687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.836801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.836830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.837016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.837045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.837279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.837309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.837478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.837507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.837623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.837651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.837760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.837790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.838028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.838058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 
00:28:00.494 [2024-07-12 19:20:02.838276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.838308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.838429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.838459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.838713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.838742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.838872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.838903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.839165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.839195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.839405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.839435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.839608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.839638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.839755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.839785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.839898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.839928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.840051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.840080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 
00:28:00.494 [2024-07-12 19:20:02.840250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.840280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.840446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.840474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.840709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.840738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.840849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.840878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.840992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.841021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.841190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.841220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.841406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.841436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.841617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.841647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.841889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.841919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.842035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.842064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 
00:28:00.494 [2024-07-12 19:20:02.842199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.842260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.842507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.842536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.842790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.842819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.843089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.843118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.843297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.843328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.843436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.843466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.843642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.843671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.843907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.843936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.844105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.844134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.844338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.844373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 
00:28:00.494 [2024-07-12 19:20:02.844497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.844527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.844718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.844748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.844929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.844958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.845054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.845083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.845199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.845236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.845350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.845380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.845555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.845584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.845766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.845796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.846005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.846035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.846205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.846243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 
00:28:00.494 [2024-07-12 19:20:02.846416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.846446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.846758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.494 [2024-07-12 19:20:02.846788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.494 qpair failed and we were unable to recover it. 00:28:00.494 [2024-07-12 19:20:02.846953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.846981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.847222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.847262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.847372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.847401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.847586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.847616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.847787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.847817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.847916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.847945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.848129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.848159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.848324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.848356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 
00:28:00.495 [2024-07-12 19:20:02.848536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.848565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.848826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.848855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.849035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.849065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.849183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.849212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.849341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.849371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.849604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.849633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.849754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.849784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.850025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.850054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.850268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.850299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.850492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.850522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 
00:28:00.495 [2024-07-12 19:20:02.850706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.850735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.850920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.850949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.851118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.851147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.851272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.851302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.851474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.851504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.851701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.851731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.851907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.851937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.852066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.852095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.852256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.852287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.852522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.852557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 
00:28:00.495 [2024-07-12 19:20:02.852670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.852700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.852834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.852864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.853035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.853064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.853181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.853210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.853416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.853446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.853668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.853698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.853948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.853977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.854144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.854173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.854381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.854411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.854578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.854608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 
00:28:00.495 [2024-07-12 19:20:02.854723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.854752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.854867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.854897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.855008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.855038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.855278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.855309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.855439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.855468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.855650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.855679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.855795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.855824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.855946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.855976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.856144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.856173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.856373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.856403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 
00:28:00.495 [2024-07-12 19:20:02.856515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.856544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.856668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.856697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.856828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.856858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.857121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.857150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.857406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.857436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.857617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.857647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.857754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.857785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.857901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.857930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.858180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.858210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 00:28:00.495 [2024-07-12 19:20:02.858398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.495 [2024-07-12 19:20:02.858428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.495 qpair failed and we were unable to recover it. 
00:28:00.495 [2024-07-12 19:20:02.858551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.858581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.858853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.858882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.859007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.859036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.859272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.859301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.859468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.859497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.859607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.859636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.859773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.859802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.859979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.860008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.860189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.860218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.860425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.860460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.860566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.860595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.860767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.860797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.861055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.861085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.861194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.861223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.861441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.861471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.861742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.861771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.861949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.861978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.862155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.495 [2024-07-12 19:20:02.862184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.495 qpair failed and we were unable to recover it.
00:28:00.495 [2024-07-12 19:20:02.862458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.862488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.862664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.862694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.862928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.862958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.863072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.863102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.863235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.863266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.863449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.863480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.863590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.863620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.863797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.863826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.864008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.864037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.864293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.864324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.864494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.864524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.864642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.864671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.864855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.864884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.865064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.865094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.865217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.865256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.865457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.865486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.865601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.865630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.865813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.865842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.866064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.866094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.866273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.866303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.866561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.866590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.866854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.866883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.867084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.867114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.867303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.867334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.867538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.867567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.867804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.867834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.868012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.868042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.868239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.868270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.868392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.868422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.868589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.868618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.868790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.868820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.868998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.869033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.869201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.869247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.869439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.869469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.869645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.869674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.869920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.869949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.870128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.870156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.870324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.870355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.870543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.870573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.870691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.870720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.870842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.870871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.871006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.871036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.871166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.871196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.871333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.871363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.871621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.871651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.871756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.871785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.871925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.871955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.872137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.872166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.872363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.872395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.872582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.872611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.872797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.872826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.873013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.873043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.873166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.873196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.873342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.873373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.873544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.873574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.873754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.873783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.873989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.874018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.874127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.874156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.874341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.874373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.874605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.874635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.874818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.874848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.875033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.875063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.875174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.875204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.875382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.875412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.875545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.875575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.875815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.875845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.875948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.875977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.876248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.876278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.876464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.876494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.496 qpair failed and we were unable to recover it.
00:28:00.496 [2024-07-12 19:20:02.876612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.496 [2024-07-12 19:20:02.876641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.876823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.876853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.877017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.877051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.877270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.877301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.877536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.877565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.877848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.877877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.878006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.878035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.878155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.878184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.878461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.878491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.878616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.878646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.878840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.878868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.879053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.879083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.879208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.879248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.879462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.879492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.879677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.879707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.879814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.879843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.880030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.880060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.880295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.880326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.880592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.880621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.880804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.880834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.881006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.881035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.881247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.881278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.881453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.881481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.881677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.881707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.881820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.881850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.882051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.882082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.882203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.882238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.882442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.882471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.882583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.882613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.882867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.882936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.883141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.883175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.883327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.883359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.883623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.883652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
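Note the failing handle changes mid-burst, from tqpair=0x7fd86c000b90 to tqpair=0xcebed0: consistent with the initiator discarding the dead qpair and retrying the fabric connect with a freshly allocated one, which then hits the same refusal. Because a refused connect() fails immediately, many attempts can land within a few milliseconds, which matches the microsecond spacing of the timestamps above. A hedged sketch of that retry shape (a simple bounded loop assumed for illustration, not SPDK's actual reconnect logic):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* One attempt with a fresh socket, mirroring one qpair connect in the log;
 * returns 0 on success, -1 on failure. */
static int try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return -1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
    if (rc < 0) {
        fprintf(stderr, "attempt failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return rc;
}

int main(void)
{
    /* Bounded retries with a short back-off; a refused connect() returns
     * at once, so without the sleep the attempts would cluster within
     * milliseconds, much as the timestamps in the log do. */
    for (int attempt = 0; attempt < 5; attempt++) {
        if (try_connect("10.0.0.2", 4420) == 0) {
            return 0;
        }
        usleep(100 * 1000); /* 100 ms between attempts */
    }
    return 1;
}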
00:28:00.497 [2024-07-12 19:20:02.883880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.883909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.884140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.884169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.884481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.884511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.884641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.884670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.884791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.884821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.885009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.885037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.885132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.885160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.885277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.885308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.885482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.885511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.885692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.885720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.885928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.885959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.886059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.886088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.886322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.886353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.886592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.886621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.886730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.886760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.886947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.886977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.887155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.887183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.887317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.887347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.887451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.887480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.887688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.887717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.887852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.887881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.888003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.888032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.888215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.888254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.888434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.888469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.888652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.888681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.888805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.888834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.889008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.889037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.889269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.889300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.889469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.889498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.889666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.889696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.889806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.889835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.497 [2024-07-12 19:20:02.890071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.497 [2024-07-12 19:20:02.890099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.497 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.890211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.890249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.890368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.890397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.890597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.890626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.890863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.890892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.891069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.891098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.891284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.891314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.891433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.891461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.891648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.891676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.891841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.891870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.891980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.892009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.892136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.892165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.892401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.892431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.892604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.892632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.892814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.892842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.893013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.893042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.893162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.893190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.893478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.893509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.893675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.893703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.893868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.893897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.894070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.894099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.894270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.894301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.894485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.894514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.894680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.894709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.894892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.894921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.895036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.895064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.895238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.895269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.895444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.895472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.895593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.895623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.895736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.895764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.895951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.895980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.896146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.896175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.896363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.896392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.896646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.896716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.896911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.896944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.897124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.897154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.897326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.897358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.897625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.897655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.897775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.897804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.897970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.897999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.898261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.898292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.898463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.898493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.898616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.898646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.898923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.898953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.899133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.899162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.899350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.899382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.899564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.899602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.899783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.899813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.899936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.498 [2024-07-12 19:20:02.899966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.498 qpair failed and we were unable to recover it.
00:28:00.498 [2024-07-12 19:20:02.900162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.498 [2024-07-12 19:20:02.900192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.498 qpair failed and we were unable to recover it. 00:28:00.498 [2024-07-12 19:20:02.900413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.498 [2024-07-12 19:20:02.900443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.498 qpair failed and we were unable to recover it. 00:28:00.498 [2024-07-12 19:20:02.900548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.498 [2024-07-12 19:20:02.900577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.498 qpair failed and we were unable to recover it. 00:28:00.498 [2024-07-12 19:20:02.900762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.498 [2024-07-12 19:20:02.900791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.498 qpair failed and we were unable to recover it. 00:28:00.498 [2024-07-12 19:20:02.901052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.498 [2024-07-12 19:20:02.901081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.498 qpair failed and we were unable to recover it. 00:28:00.498 [2024-07-12 19:20:02.901266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.498 [2024-07-12 19:20:02.901297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.498 qpair failed and we were unable to recover it. 00:28:00.498 [2024-07-12 19:20:02.901543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.498 [2024-07-12 19:20:02.901572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.498 qpair failed and we were unable to recover it. 00:28:00.498 [2024-07-12 19:20:02.901806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.498 [2024-07-12 19:20:02.901835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.498 qpair failed and we were unable to recover it. 00:28:00.498 [2024-07-12 19:20:02.901948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.498 [2024-07-12 19:20:02.901977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.498 qpair failed and we were unable to recover it. 00:28:00.498 [2024-07-12 19:20:02.902212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.498 [2024-07-12 19:20:02.902250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.498 qpair failed and we were unable to recover it. 
00:28:00.498 [2024-07-12 19:20:02.902437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.498 [2024-07-12 19:20:02.902466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.498 qpair failed and we were unable to recover it. 00:28:00.498 [2024-07-12 19:20:02.902735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.498 [2024-07-12 19:20:02.902764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.498 qpair failed and we were unable to recover it. 00:28:00.498 [2024-07-12 19:20:02.903021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.498 [2024-07-12 19:20:02.903051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.498 qpair failed and we were unable to recover it. 00:28:00.498 [2024-07-12 19:20:02.903178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.498 [2024-07-12 19:20:02.903208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.498 qpair failed and we were unable to recover it. 00:28:00.498 [2024-07-12 19:20:02.903342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.498 [2024-07-12 19:20:02.903373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.498 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.903543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.903572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.903753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.903782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.904006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.904036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.904245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.904276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.904538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.904569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 
00:28:00.499 [2024-07-12 19:20:02.904690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.904718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.904849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.904878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.905111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.905140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.905372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.905402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.905593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.905623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.905855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.905885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.906133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.906162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.906423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.906453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.906569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.906598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.906835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.906865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 
00:28:00.499 [2024-07-12 19:20:02.907103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.907133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.907367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.907397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.907521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.907551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.907785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.907814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.907928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.907957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.908144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.908174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.908351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.908381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.908617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.908653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.908834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.908864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.909046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.909075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 
00:28:00.499 [2024-07-12 19:20:02.909258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.909289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.909409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.909438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.909612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.909641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.909824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.909853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.910018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.910048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.910275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.910305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.910562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.910592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.910860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.910890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.911089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.911118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.911260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.911291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 
00:28:00.499 [2024-07-12 19:20:02.911487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.911517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.911649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.911679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.911937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.911967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.912161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.912190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.912505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.912536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.912779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.912809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.912986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.913015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.913198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.913237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.913428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.913458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.913727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.913756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 
00:28:00.499 [2024-07-12 19:20:02.913988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.914017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.914219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.914258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.914499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.914529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.914662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.914691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.914821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.914855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.915020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.915049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.915172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.915201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.915430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.915461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.915569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.915598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.915714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.915743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 
00:28:00.499 [2024-07-12 19:20:02.915907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.499 [2024-07-12 19:20:02.915936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.499 qpair failed and we were unable to recover it. 00:28:00.499 [2024-07-12 19:20:02.916194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.916234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.916355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.916385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.916625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.916654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.916768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.916797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.916980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.917009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.917186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.917215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.917397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.917428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.917623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.917653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.917837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.917866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 
00:28:00.500 [2024-07-12 19:20:02.918142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.918171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.918414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.918445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.918549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.918578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.918689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.918718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.918841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.918870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.919070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.919100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.919280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.919310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.919427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.919457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.919644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.919674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.919838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.919867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 
00:28:00.500 [2024-07-12 19:20:02.920074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.920103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.920364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.920395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.920645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.920675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.920867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.920896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.921023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.921052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.921294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.921324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.921523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.921553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.921721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.921750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.921926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.921955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.922217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.922253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 
00:28:00.500 [2024-07-12 19:20:02.922512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.922542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.922721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.922751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.922919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.922948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.923076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.923106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.923287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.923328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.923540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.923569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.923704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.923733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.923990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.924020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.924186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.924216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.924351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.924381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 
00:28:00.500 [2024-07-12 19:20:02.924547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.924576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.924777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.924807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.924927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.924956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.925073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.925102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.925269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.925299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.925423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.925452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.925636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.925665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.925763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.925790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.925979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.926009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.926124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.926154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 
00:28:00.500 [2024-07-12 19:20:02.926366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.926396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.926577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.926606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.926814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.926844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.926973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.927002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.927113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.927142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.927252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.927282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.927494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.927523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.927647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.927677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.927794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.500 [2024-07-12 19:20:02.927823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.500 qpair failed and we were unable to recover it. 00:28:00.500 [2024-07-12 19:20:02.928002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.501 [2024-07-12 19:20:02.928031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.501 qpair failed and we were unable to recover it. 
00:28:00.501 [2024-07-12 19:20:02.928205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.501 [2024-07-12 19:20:02.928241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.501 qpair failed and we were unable to recover it.
00:28:00.501 [... the identical connect() failed / sock connection error / qpair failed sequence repeats 62 times in total for tqpair=0x7fd87c000b90, timestamps 19:20:02.928205 through 19:20:02.941200 ...]
00:28:00.502 [2024-07-12 19:20:02.941387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.502 [2024-07-12 19:20:02.941455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.502 qpair failed and we were unable to recover it.
00:28:00.504 [... the same sequence then repeats 148 times in total for tqpair=0x7fd86c000b90, timestamps 19:20:02.941387 through 19:20:02.973414 ...]
00:28:00.504 [2024-07-12 19:20:02.973599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.504 [2024-07-12 19:20:02.973628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.504 qpair failed and we were unable to recover it. 00:28:00.504 [2024-07-12 19:20:02.973808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.504 [2024-07-12 19:20:02.973838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.504 qpair failed and we were unable to recover it. 00:28:00.504 [2024-07-12 19:20:02.974101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.504 [2024-07-12 19:20:02.974130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.504 qpair failed and we were unable to recover it. 00:28:00.504 [2024-07-12 19:20:02.974264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.504 [2024-07-12 19:20:02.974295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.504 qpair failed and we were unable to recover it. 00:28:00.504 [2024-07-12 19:20:02.974559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.504 [2024-07-12 19:20:02.974594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.504 qpair failed and we were unable to recover it. 00:28:00.504 [2024-07-12 19:20:02.974732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.504 [2024-07-12 19:20:02.974761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.504 qpair failed and we were unable to recover it. 00:28:00.504 [2024-07-12 19:20:02.974956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.504 [2024-07-12 19:20:02.974985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.504 qpair failed and we were unable to recover it. 00:28:00.504 [2024-07-12 19:20:02.975096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.504 [2024-07-12 19:20:02.975125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.504 qpair failed and we were unable to recover it. 00:28:00.504 [2024-07-12 19:20:02.975311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.504 [2024-07-12 19:20:02.975341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.504 qpair failed and we were unable to recover it. 00:28:00.504 [2024-07-12 19:20:02.975587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.504 [2024-07-12 19:20:02.975617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.504 qpair failed and we were unable to recover it. 
00:28:00.504 [2024-07-12 19:20:02.975797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.504 [2024-07-12 19:20:02.975826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.504 qpair failed and we were unable to recover it. 00:28:00.504 [2024-07-12 19:20:02.976079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.504 [2024-07-12 19:20:02.976108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.504 qpair failed and we were unable to recover it. 00:28:00.504 [2024-07-12 19:20:02.976235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.976265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.976445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.976474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.976659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.976689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.976855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.976885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.976994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.977024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.977262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.977293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.977413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.977442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.977624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.977653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 
00:28:00.505 [2024-07-12 19:20:02.977859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.977889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.977992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.978022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.978281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.978311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.978488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.978518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.978757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.978787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.978900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.978929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.979181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.979210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.979516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.979547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.979726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.979756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.979882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.979911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 
00:28:00.505 [2024-07-12 19:20:02.980103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.980132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.980275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.980306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.980478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.980507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.980642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.980671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.980784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.980812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.981095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.981124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.981249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.981279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.981405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.981434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.981667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.981696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.981820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.981849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 
00:28:00.505 [2024-07-12 19:20:02.982022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.982051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.982163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.982192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.982371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.982401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.982589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.982619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.982828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.982862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.983107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.983136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.983367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.983398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.983581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.983611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.983849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.983879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.984138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.984167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 
00:28:00.505 [2024-07-12 19:20:02.984334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.984364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.984475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.984504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.984767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.984796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.985056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.985085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.985257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.985286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.985470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.985500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.985762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.985791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.985924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.985952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.986192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.986222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.986395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.986424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 
00:28:00.505 [2024-07-12 19:20:02.986624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.986655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.986844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.986874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.987055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.987084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.987283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.987313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.987493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.987522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.987616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.987645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.987758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.505 [2024-07-12 19:20:02.987788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.505 qpair failed and we were unable to recover it. 00:28:00.505 [2024-07-12 19:20:02.987966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.987995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.988098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.988127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.988358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.988388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 
00:28:00.506 [2024-07-12 19:20:02.988502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.988531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.988712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.988743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.988864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.988892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.989058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.989087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.989352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.989381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.989571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.989601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.989813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.989842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.990017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.990046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.990232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.990263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.990445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.990474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 
00:28:00.506 [2024-07-12 19:20:02.990711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.990741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.990916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.990945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.991058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.991087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.991295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.991326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.991499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.991533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.991728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.991758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.991938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.991967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.992217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.992255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.992487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.992517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.992701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.992730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 
00:28:00.506 [2024-07-12 19:20:02.992992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.993021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.993236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.993266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.993376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.993405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.993595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.993624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.993806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.993836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.993947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.993977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.994159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.994189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.994380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.994410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.994530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.994560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.994724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.994753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 
00:28:00.506 [2024-07-12 19:20:02.994860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.994889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.995152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.995181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.995377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.995408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.995536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.995565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.995799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.995829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.996003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.996033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.996269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.996299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.996466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.996496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.996598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.996628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.996813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.996842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 
00:28:00.506 [2024-07-12 19:20:02.997038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.997067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.997333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.997364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.997527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.997557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.997676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.997706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.997828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.997857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.997974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.998004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.998195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.998232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.998401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.998430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.998594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.998623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.998795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.998824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 
00:28:00.506 [2024-07-12 19:20:02.999004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.999033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.999146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.999175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.999494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.999525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.999709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.999737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:02.999870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:02.999903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:03.000039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:03.000068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.506 [2024-07-12 19:20:03.000299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.506 [2024-07-12 19:20:03.000329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.506 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.000452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.000482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.000686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.000715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.000920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.000950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 
00:28:00.507 [2024-07-12 19:20:03.001182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.001211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.001413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.001443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.001620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.001649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.001922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.001951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.002124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.002153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.002335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.002366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.002550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.002579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.002781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.002810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.002921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.002951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.003151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.003180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 
00:28:00.507 [2024-07-12 19:20:03.003464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.003495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.003623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.003652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.003853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.003882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.004060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.004090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.004198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.004238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.004354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.004384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.004576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.004605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.004777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.004807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.004995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.005024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.005143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.005172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 
00:28:00.507 [2024-07-12 19:20:03.005479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.005510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.005741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.005810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.006047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.006080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.006294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.006328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.006458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.006488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.006680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.006710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.006895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.006924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.007099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.007128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.007325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.007357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.007618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.007649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 
00:28:00.507 [2024-07-12 19:20:03.007822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.007851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.007967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.007996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.008200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.008240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.008479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.008509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.008678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.008707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.008835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.008865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.008990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.009020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.009135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.009164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.009349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.009379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.009552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.009582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 
00:28:00.507 [2024-07-12 19:20:03.009699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.009728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.009844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.009873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.010045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.010074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.010249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.010279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.010606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.010636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.010821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.010850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.011034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.011063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.011243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.011274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.011460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.011494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.011730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.011761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 
00:28:00.507 [2024-07-12 19:20:03.011930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.011960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.507 qpair failed and we were unable to recover it. 00:28:00.507 [2024-07-12 19:20:03.012147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.507 [2024-07-12 19:20:03.012176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.012440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.012471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.012641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.012671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.012904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.012934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.013120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.013150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.013256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.013285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.013472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.013501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.013785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.013815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.013924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.013953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 
00:28:00.508 [2024-07-12 19:20:03.014075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.014104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.014279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.014309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.014526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.014556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.014795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.014824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.015012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.015041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.015252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.015283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.015411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.015440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.015639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.015669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.015859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.015889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.016022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.016051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 
00:28:00.508 [2024-07-12 19:20:03.016171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.016200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.016366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.016396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.016516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.016545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.016718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.016748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.016961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.016990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.017103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.017132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.017326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.017356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.017457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.017486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.017586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.017615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.017744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.017773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 
00:28:00.508 [2024-07-12 19:20:03.018005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.018034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.018310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.018340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.018472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.018501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.018691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.018720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.018826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.018855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.019094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.019123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.019249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.019279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.019490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.019520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.019633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.019662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.019821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.019890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 
00:28:00.508 [2024-07-12 19:20:03.020086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.020119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.020358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.020390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.020574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.020604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.020780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.020810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.020986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.021016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.021137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.021167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.021401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.021432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.021703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.021732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.021914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.021943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.022134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.022164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 
00:28:00.508 [2024-07-12 19:20:03.022361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.022391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.022572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.022601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.022737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.022766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.022942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.022972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.023089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.023119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.023238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.023270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.508 [2024-07-12 19:20:03.023445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.508 [2024-07-12 19:20:03.023475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.508 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.023753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.023782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.023959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.023988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.024102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.024132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 
00:28:00.509 [2024-07-12 19:20:03.024325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.024356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.024537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.024566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.024748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.024777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.024953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.024982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.025151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.025181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.025465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.025495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.025629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.025660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.025837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.025867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.026102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.026131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.026408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.026439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 
00:28:00.509 [2024-07-12 19:20:03.026633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.026663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.026863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.026892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.027131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.027160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.027396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.027427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.027545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.027574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.027692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.027722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.027896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.027925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.028096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.028125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.028296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.028327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.028563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.028597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 
00:28:00.509 [2024-07-12 19:20:03.028778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.028808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.028926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.028956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.029156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.029186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.029442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.029473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.029594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.029624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.029792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.029822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.030026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.030056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.030244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.030274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.030395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.030425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.030635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.030665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 
00:28:00.509 [2024-07-12 19:20:03.030857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.030886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.031123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.031152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.031340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.031370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.031552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.031582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.031682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.031711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.031885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.031914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.032034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.032064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.032250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.032282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.032461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.032491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.032752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.032782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 
00:28:00.509 [2024-07-12 19:20:03.032925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.032954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.033085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.033114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.033237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.033268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.033521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.033549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.033734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.033763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.034021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.034050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.034169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.034198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.034322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.034353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.034532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.034562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.034690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.034719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 
00:28:00.509 [2024-07-12 19:20:03.034927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.034956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.035080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.035109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.035210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.035247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.035436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.509 [2024-07-12 19:20:03.035466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.509 qpair failed and we were unable to recover it. 00:28:00.509 [2024-07-12 19:20:03.035584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.510 [2024-07-12 19:20:03.035613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.510 qpair failed and we were unable to recover it. 00:28:00.510 [2024-07-12 19:20:03.035803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.510 [2024-07-12 19:20:03.035832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.510 qpair failed and we were unable to recover it. 00:28:00.510 [2024-07-12 19:20:03.036013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.510 [2024-07-12 19:20:03.036042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.510 qpair failed and we were unable to recover it. 00:28:00.510 [2024-07-12 19:20:03.036214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.510 [2024-07-12 19:20:03.036259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.510 qpair failed and we were unable to recover it. 00:28:00.510 [2024-07-12 19:20:03.036435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.510 [2024-07-12 19:20:03.036465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.510 qpair failed and we were unable to recover it. 00:28:00.510 [2024-07-12 19:20:03.036590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.510 [2024-07-12 19:20:03.036625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.510 qpair failed and we were unable to recover it. 
00:28:00.510 [2024-07-12 19:20:03.036799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.510 [2024-07-12 19:20:03.036828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:00.510 qpair failed and we were unable to recover it.
[The same three-line error block repeats back-to-back for the rest of this span (target timestamps 19:20:03.036799 through 19:20:03.076121; elapsed 00:28:00.510 through 00:28:00.801): every reconnect attempt to tqpair=0x7fd874000b90 at 10.0.0.2, port 4420 fails with errno = 111, and each time the qpair cannot be recovered.]
00:28:00.801 [2024-07-12 19:20:03.076244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.801 [2024-07-12 19:20:03.076274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.801 qpair failed and we were unable to recover it. 00:28:00.801 [2024-07-12 19:20:03.076416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.801 [2024-07-12 19:20:03.076445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.801 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.076700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.076729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.076902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.076932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.077046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.077075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.077245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.077275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.077456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.077485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.077671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.077701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.077818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.077847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.077960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.077990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 
00:28:00.802 [2024-07-12 19:20:03.078191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.078221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.078358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.078388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.078502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.078531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.078644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.078674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.078804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.078833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.079002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.079031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.079202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.079285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.079431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.079465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.079574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.079606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.079802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.079832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 
00:28:00.802 [2024-07-12 19:20:03.080004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.080034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.080153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.080184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.080409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.080439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.080621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.080652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.080763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.080793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.080926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.080955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.081123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.081152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.081279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.081309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.081434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.081463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.802 [2024-07-12 19:20:03.081634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.081677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 
00:28:00.802 [2024-07-12 19:20:03.081847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.802 [2024-07-12 19:20:03.081876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.802 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.081980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.082008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.082275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.082306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.082427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.082457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.082694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.082724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.082829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.082859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.083024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.083053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.083194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.083223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.083350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.083382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.083501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.083529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 
00:28:00.803 [2024-07-12 19:20:03.083648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.083676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.083853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.083881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.084065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.084094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.084208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.084249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.084428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.084458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.084637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.084667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.084879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.084909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.085084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.085114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.085242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.085274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.085441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.085470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 
00:28:00.803 [2024-07-12 19:20:03.085585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.085613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.085737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.085773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.085959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.085988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.086175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.086206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.086396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.086427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.086553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.086582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.086704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.086735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.086906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.086935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.087042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.087071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 00:28:00.803 [2024-07-12 19:20:03.087247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.803 [2024-07-12 19:20:03.087277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.803 qpair failed and we were unable to recover it. 
00:28:00.804 [2024-07-12 19:20:03.087448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.087478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.087656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.087686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.087857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.087886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.087987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.088016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.088252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.088283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.088456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.088486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.088597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.088627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.088795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.088824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.088941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.088971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.089155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.089189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 
00:28:00.804 [2024-07-12 19:20:03.089305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.089335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.089446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.089476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.089650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.089680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.089888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.089918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.090038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.090067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.090169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.090198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.090326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.090361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.090541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.090572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.090761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.090792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.090909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.090938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 
00:28:00.804 [2024-07-12 19:20:03.091075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.091107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.091262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.091293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.091402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.091432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.091619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.091653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.091772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.091803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.091918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.091948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.092114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.092143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.804 qpair failed and we were unable to recover it. 00:28:00.804 [2024-07-12 19:20:03.092311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.804 [2024-07-12 19:20:03.092341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.092458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.092487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.092673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.092705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 
00:28:00.805 [2024-07-12 19:20:03.092816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.092845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.093080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.093110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.093237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.093267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.093374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.093404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.093575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.093604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.093709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.093738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.093984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.094052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.094243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.094276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.094401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.094431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.094554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.094584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 
00:28:00.805 [2024-07-12 19:20:03.094817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.094847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.094951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.094981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.095084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.095113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.095234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.095266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.095391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.095421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.095596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.095625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.095745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.095775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.095944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.095973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.096165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.096194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.096308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.096345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 
00:28:00.805 [2024-07-12 19:20:03.096460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.096489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.096681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.096711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.096898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.096928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.097043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.097073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.097316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.097348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.805 [2024-07-12 19:20:03.097565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.805 [2024-07-12 19:20:03.097594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.805 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.097801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.097831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.097995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.098025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.098128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.098158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.098279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.098308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 
00:28:00.806 [2024-07-12 19:20:03.098521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.098550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.098684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.098715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.098887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.098917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.099109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.099139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.099252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.099282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.099388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.099417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.099630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.099660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.099835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.099863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.099968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.099997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.100126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.100157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 
00:28:00.806 [2024-07-12 19:20:03.100352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.100383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.100561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.100592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.100768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.100799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.100966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.100996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.101109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.101138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.101311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.101342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.101624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.101692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.101886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.101920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.102118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.102148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.102280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.102311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 
00:28:00.806 [2024-07-12 19:20:03.102418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.102448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.102631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.806 [2024-07-12 19:20:03.102661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.806 qpair failed and we were unable to recover it. 00:28:00.806 [2024-07-12 19:20:03.102773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.102803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.103088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.103118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.103218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.103260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.103515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.103545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.103723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.103753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.103933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.103962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.104157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.104187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.104300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.104330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 
00:28:00.807 [2024-07-12 19:20:03.104458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.104488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.104608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.104637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.104811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.104840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.105008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.105038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.105204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.105244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.105429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.105459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.105634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.105663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.105779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.105808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.106055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.106084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.106272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.106303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 
00:28:00.807 [2024-07-12 19:20:03.106417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.106446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.106683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.106712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.106897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.106926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.107112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.107146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.107326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.107355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.107537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.107566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.107737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.107766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.107941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.107970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.108219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.108256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.108558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.108588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 
00:28:00.807 [2024-07-12 19:20:03.108845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.108875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.109041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.109071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.109173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.807 [2024-07-12 19:20:03.109202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.807 qpair failed and we were unable to recover it. 00:28:00.807 [2024-07-12 19:20:03.109408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.109438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.109555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.109584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.109765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.109794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.109921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.109951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.110057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.110087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.110211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.110252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.110427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.110456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 
00:28:00.808 [2024-07-12 19:20:03.110560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.110591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.110849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.110878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.111144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.111174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.111357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.111387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.111496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.111525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.111656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.111685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.111941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.111970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.112156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.112185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.112323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.112354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.112468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.112497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 
00:28:00.808 [2024-07-12 19:20:03.112665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.112694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.112866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.112896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.113083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.113112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.113222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.113262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.113392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.113421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.113681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.113712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.113840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.113869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.113988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.114018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.114257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.114288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.114499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.114528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 
00:28:00.808 [2024-07-12 19:20:03.114701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.114731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.114841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.114869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.114979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.808 [2024-07-12 19:20:03.115008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.808 qpair failed and we were unable to recover it. 00:28:00.808 [2024-07-12 19:20:03.115129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.115158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.115325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.115396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.115589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.115622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.115744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.115775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.115961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.115991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.116193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.116236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.116355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.116386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 
00:28:00.809 [2024-07-12 19:20:03.116558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.116587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.116826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.116856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.117027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.117056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.117165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.117195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.117380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.117412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.117531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.117561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.117743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.117772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.118008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.118037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.118235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.118267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.118450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.118479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 
00:28:00.809 [2024-07-12 19:20:03.118676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.118706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.118978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.119008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.119194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.119237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.119408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.119438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.119642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.119671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.119788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.119817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.120056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.120085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.120255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.120285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.120483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.120513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.120685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.120714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 
00:28:00.809 [2024-07-12 19:20:03.120826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.120855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.120974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.121003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.121205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.121243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.121355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.121384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.809 qpair failed and we were unable to recover it. 00:28:00.809 [2024-07-12 19:20:03.121494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.809 [2024-07-12 19:20:03.121523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.121705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.121735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.121859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.121889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.122065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.122094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.122321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.122351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.122624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.122653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 
00:28:00.810 [2024-07-12 19:20:03.122838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.122867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.123100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.123129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.123303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.123333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.123570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.123600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.123875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.123909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.124185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.124215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.124467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.124497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.124617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.124646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.124747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.124776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.124978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.125007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 
00:28:00.810 [2024-07-12 19:20:03.125127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.125157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.125273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.125303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.125561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.125590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.125792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.125822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.125943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.125972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.126139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.126168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.126398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.126428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.126543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.126573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.126831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.126861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.126979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.127008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 
00:28:00.810 [2024-07-12 19:20:03.127212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.127251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.127463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.127493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.127664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.127694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.127805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.127834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.127960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.127989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.128167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.128197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.128389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.128419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.128518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.810 [2024-07-12 19:20:03.128548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.810 qpair failed and we were unable to recover it. 00:28:00.810 [2024-07-12 19:20:03.128717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.128747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.128860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.128890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 
00:28:00.811 [2024-07-12 19:20:03.129004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.129034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.129255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.129287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.129454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.129484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.129655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.129684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.129808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.129838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.130092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.130122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.130334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.130365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.130601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.130630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.130756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.130785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.131048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.131077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 
00:28:00.811 [2024-07-12 19:20:03.131192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.131220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.131372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.131402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.131526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.131556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.131678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.131707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.131872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.131907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.132057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.132086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.132202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.132239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.132451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.132481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.132587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.132617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.132795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.132825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 
00:28:00.811 [2024-07-12 19:20:03.132931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.132960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.133223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.133264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.133541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.133570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.133748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.133777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.133948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.133977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.134095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.134125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.134257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.134287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.134398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.134427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.134663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.134694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 00:28:00.811 [2024-07-12 19:20:03.134872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.811 [2024-07-12 19:20:03.134900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.811 qpair failed and we were unable to recover it. 
00:28:00.811 [2024-07-12 19:20:03.135136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.811 [2024-07-12 19:20:03.135165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:00.811 qpair failed and we were unable to recover it.
00:28:00.818 [... the same error triple (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 19:20:03.135288 through 19:20:03.172615 ...]
00:28:00.818 [2024-07-12 19:20:03.172748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.172778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.172991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.173059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.173274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.173311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.173442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.173473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.173662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.173692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.173792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.173822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.173995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.174025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.174145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.174174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.174355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.174386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.174499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.174528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 
00:28:00.818 [2024-07-12 19:20:03.174647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.174676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.174863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.174892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.175064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.175093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.175218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.175259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.175374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.175412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.175580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.175610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.175719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.175748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.175882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.175911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.176007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.176036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.176146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.176175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 
00:28:00.818 [2024-07-12 19:20:03.176285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.818 [2024-07-12 19:20:03.176315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.818 qpair failed and we were unable to recover it. 00:28:00.818 [2024-07-12 19:20:03.176423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.176453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.176689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.176719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.176975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.177004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.177193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.177235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.177425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.177455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.177659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.177688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.177787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.177817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.178061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.178091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.178259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.178290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 
00:28:00.819 [2024-07-12 19:20:03.178485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.178515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.178694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.178724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.178908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.178938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.179056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.179085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.179211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.179248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.179427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.179457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.179697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.179727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.179906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.179935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.180179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.180209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.180346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.180376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 
00:28:00.819 [2024-07-12 19:20:03.180609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.180639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.180816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.180846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.180966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.180995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.181098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.181127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.181243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.181274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.181551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.181581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.181754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.181783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.181890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.181919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.182025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.182055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.182169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.182199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 
00:28:00.819 [2024-07-12 19:20:03.182385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.182414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.182583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.182613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.182798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.182828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.819 qpair failed and we were unable to recover it. 00:28:00.819 [2024-07-12 19:20:03.182928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.819 [2024-07-12 19:20:03.182957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.183264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.183300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.183507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.183537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.183726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.183755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.183867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.183897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.184071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.184101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.184285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.184315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 
00:28:00.820 [2024-07-12 19:20:03.184437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.184466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.184629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.184658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.184843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.184873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.184996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.185025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.185135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.185165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.185278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.185309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.185417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.185445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.185706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.185735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.185975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.186005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.186244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.186274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 
00:28:00.820 [2024-07-12 19:20:03.186408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.186437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.186604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.186634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.186804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.186833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.187011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.187041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.187241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.187272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.187450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.187480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.187682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.187712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.187837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.187866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.187985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.188015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.188197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.188236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 
00:28:00.820 [2024-07-12 19:20:03.188416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.188445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.188635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.188666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.188788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.188817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.189016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.189045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.189211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.189252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.189378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.820 [2024-07-12 19:20:03.189407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.820 qpair failed and we were unable to recover it. 00:28:00.820 [2024-07-12 19:20:03.189651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.189680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.189886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.189915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.190152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.190181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.190316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.190347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 
00:28:00.821 [2024-07-12 19:20:03.190474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.190502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.190613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.190642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.190890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.190919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.191111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.191140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.191391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.191427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.191663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.191693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.191791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.191820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.191928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.191956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.192204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.192242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.192502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.192532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 
00:28:00.821 [2024-07-12 19:20:03.192657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.192686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.192796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.192826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.193068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.193098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.193213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.193268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.193387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.193416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.193627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.193656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.193785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.193815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.194016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.194046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.194313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.194344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.194511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.194541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 
00:28:00.821 [2024-07-12 19:20:03.194666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.194696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.194794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.194824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.194952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.194981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.195158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.195187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.195324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.195354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.195545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.195574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.195739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.195768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.195881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.195910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.821 qpair failed and we were unable to recover it. 00:28:00.821 [2024-07-12 19:20:03.196091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.821 [2024-07-12 19:20:03.196121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.822 qpair failed and we were unable to recover it. 00:28:00.822 [2024-07-12 19:20:03.196303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.822 [2024-07-12 19:20:03.196334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.822 qpair failed and we were unable to recover it. 
00:28:00.822 [2024-07-12 19:20:03.196579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.822 [2024-07-12 19:20:03.196608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.822 qpair failed and we were unable to recover it. 00:28:00.822 [2024-07-12 19:20:03.196794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.822 [2024-07-12 19:20:03.196825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.822 qpair failed and we were unable to recover it. 00:28:00.822 [2024-07-12 19:20:03.196994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.822 [2024-07-12 19:20:03.197023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.822 qpair failed and we were unable to recover it. 00:28:00.822 [2024-07-12 19:20:03.197279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.822 [2024-07-12 19:20:03.197310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.822 qpair failed and we were unable to recover it. 00:28:00.822 [2024-07-12 19:20:03.197411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.822 [2024-07-12 19:20:03.197440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.822 qpair failed and we were unable to recover it. 00:28:00.822 [2024-07-12 19:20:03.197620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.822 [2024-07-12 19:20:03.197649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.822 qpair failed and we were unable to recover it. 00:28:00.822 [2024-07-12 19:20:03.197763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.822 [2024-07-12 19:20:03.197792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.822 qpair failed and we were unable to recover it. 00:28:00.822 [2024-07-12 19:20:03.197907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.822 [2024-07-12 19:20:03.197937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.822 qpair failed and we were unable to recover it. 00:28:00.822 [2024-07-12 19:20:03.198060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.822 [2024-07-12 19:20:03.198089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.822 qpair failed and we were unable to recover it. 00:28:00.822 [2024-07-12 19:20:03.198278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.822 [2024-07-12 19:20:03.198309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.822 qpair failed and we were unable to recover it. 
00:28:00.822 [2024-07-12 19:20:03.198426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.822 [2024-07-12 19:20:03.198455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.822 qpair failed and we were unable to recover it.
[... the three-line connect()/qpair failure above repeats back-to-back through 19:20:03.237, always errno = 111 against addr=10.0.0.2, port=4420: 71 attempts on tqpair=0x7fd86c000b90 (19:20:03.198-.211), 109 attempts on tqpair=0xcebed0 (19:20:03.211-.231), and 30 attempts on tqpair=0x7fd874000b90 (19:20:03.231-.237); every attempt ends "qpair failed and we were unable to recover it." ...]
00:28:00.830 [2024-07-12 19:20:03.237809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.237838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.238061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.238090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.238193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.238222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.238358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.238387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.238514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.238544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.238719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.238748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.238919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.238948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.239075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.239103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.239212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.239250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.239419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.239448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 
00:28:00.830 [2024-07-12 19:20:03.239693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.239722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.239836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.239864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.239966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.239995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.240161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.240189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.240384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.240414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.240537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.240566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.240674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.240702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.240870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.240900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.241011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.241039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.241159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.241187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 
00:28:00.830 [2024-07-12 19:20:03.241375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.241405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.241589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.241618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.241740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.241769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.830 [2024-07-12 19:20:03.241975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.830 [2024-07-12 19:20:03.242009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.830 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.242149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.242178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.242302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.242333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.242517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.242547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.242729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.242759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.242939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.242968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.243064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.243094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 
00:28:00.831 [2024-07-12 19:20:03.243275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.243306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.243474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.243503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.243621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.243650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.243862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.243890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.243989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.244018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.244116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.244145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.244320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.244349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.244572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.244602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.244772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.244801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.244970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.244999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 
00:28:00.831 [2024-07-12 19:20:03.245110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.245138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.245311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.245342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.245517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.245545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.245661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.245690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.245831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.245859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.245974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.246003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.246105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.246134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.246244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.246274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.246450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.246480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.246590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.246620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 
00:28:00.831 [2024-07-12 19:20:03.246797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.246866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.247090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.247124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.247247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.247280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.247456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.247486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.247666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.247696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.247952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.247982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.248157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.248186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.248316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.248347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.248534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.248564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.248731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.248760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 
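[Editor's note] Each "qpair failed and we were unable to recover it." record closes out one failed connect attempt, and the tqpair handle changing across the blocks (0xcebed0, then 0x7fd874000b90, now 0x7fd86c000b90) suggests the host side tears down each failed qpair and allocates a fresh one before retrying. A bounded retry loop of the same general shape is sketched below; this is again illustrative under my own naming, not SPDK's actual reconnect logic:

/* retry_probe.c - illustrative sketch: re-probe a TCP endpoint until it
 * accepts a connection or a retry budget is exhausted, the pattern the
 * host driver is effectively executing in the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static int try_connect(const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    struct sockaddr_in a = {0};
    a.sin_family = AF_INET;
    a.sin_port = htons(port);
    inet_pton(AF_INET, ip, &a.sin_addr);

    int rc = connect(fd, (struct sockaddr *)&a, sizeof(a));
    int saved = errno;       /* preserve errno across close() */
    close(fd);
    errno = saved;
    return rc;
}

int main(void)
{
    for (int attempt = 1; attempt <= 10; attempt++) {   /* bounded budget */
        if (try_connect("10.0.0.2", 4420) == 0) {
            printf("attempt %d: target is listening\n", attempt);
            return 0;
        }
        printf("attempt %d: errno = %d (%s)\n",
               attempt, errno, strerror(errno));
        sleep(1);            /* back off before re-probing */
    }
    fprintf(stderr, "target never came up; giving up\n");
    return 1;
}

Since errno stays at 111 for the entire window rather than flapping, the usual suspects are the nvmf target application not having started, no listener having been added for 10.0.0.2:4420, or the test NIC/IP not being configured, rather than a transient network glitch.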
00:28:00.831 [2024-07-12 19:20:03.248931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.831 [2024-07-12 19:20:03.248961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.831 qpair failed and we were unable to recover it. 00:28:00.831 [2024-07-12 19:20:03.249068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.249098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.249220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.249265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.249533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.249571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.249757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.249787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.249977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.250007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.250115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.250143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.250347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.250378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.252124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.252177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.252319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.252350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 
00:28:00.832 [2024-07-12 19:20:03.252523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.252551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.252786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.252816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.252934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.252964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.253161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.253190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.253381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.253412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.253605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.253635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.253868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.253897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.254012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.254042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.254145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.254174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.254307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.254339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 
00:28:00.832 [2024-07-12 19:20:03.254464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.254493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.254690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.254720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.254903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.254934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.255029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.255059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.255177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.255206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.255402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.255432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.255623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.255653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.255828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.255857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.256037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.256066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.256188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.256217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 
00:28:00.832 [2024-07-12 19:20:03.256422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.256452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.256568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.256597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.256770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.256799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.256929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.256958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.257198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.257243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.257434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.257463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.257648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.257677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.832 qpair failed and we were unable to recover it. 00:28:00.832 [2024-07-12 19:20:03.257855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.832 [2024-07-12 19:20:03.257885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.258064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.258093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.258215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.258256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 
00:28:00.833 [2024-07-12 19:20:03.258389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.258419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.258527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.258556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.258659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.258688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.258794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.258828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.258969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.258999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.259187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.259216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.259437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.259467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.259636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.259664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.259778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.259808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.260016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.260045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 
00:28:00.833 [2024-07-12 19:20:03.260240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.260270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.260380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.260410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.260513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.260542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.260723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.260751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.260936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.260965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.261084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.261113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.261238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.261268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.261395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.261425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.261641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.261671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.261778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.261807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 
00:28:00.833 [2024-07-12 19:20:03.261917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.261947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.262115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.262144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.262263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.262295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.262504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.262534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.262728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.262758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.262933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.262963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.263090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.263120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.263308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.263339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.263468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.263498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 00:28:00.833 [2024-07-12 19:20:03.263609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.833 [2024-07-12 19:20:03.263639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:00.833 qpair failed and we were unable to recover it. 
00:28:00.833 [2024-07-12 19:20:03.263761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.833 [2024-07-12 19:20:03.263791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:00.833 qpair failed and we were unable to recover it.
[... the identical three-record failure (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 19:20:03.263915 through 19:20:03.301686, cycling through tqpair values 0x7fd86c000b90, 0xcebed0, 0x7fd874000b90, and 0x7fd87c000b90, always against addr=10.0.0.2, port=4420 ...]
00:28:00.839 [2024-07-12 19:20:03.301715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.839 [2024-07-12 19:20:03.301744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.839 qpair failed and we were unable to recover it.
00:28:00.839 [2024-07-12 19:20:03.301856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.301885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.302070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.302100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.302296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.302327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.302539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.302568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.302751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.302786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.302905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.302934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.303123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.303153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.303366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.303398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.303613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.303642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.303814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.303843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 
00:28:00.839 [2024-07-12 19:20:03.303945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.303974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.304170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.304199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.304317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.304347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.304467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.304496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.304757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.304787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.304899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.304928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.305024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.305054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.305171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.305201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.305413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.305444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.305574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.305604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 
00:28:00.839 [2024-07-12 19:20:03.305706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.305735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.305843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.305872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.305989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.306018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.306125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.839 [2024-07-12 19:20:03.306154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.839 qpair failed and we were unable to recover it. 00:28:00.839 [2024-07-12 19:20:03.306410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.306441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.306558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.306588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.306701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.306730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.306837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.306874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.306981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.307010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.307194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.307223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 
00:28:00.840 [2024-07-12 19:20:03.307417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.307447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.307556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.307586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.307712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.307742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.307854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.307883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.308101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.308129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.308304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.308334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.308451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.308481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.308691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.308721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.308854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.308882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.309093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.309122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 
00:28:00.840 [2024-07-12 19:20:03.309235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.309266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.309506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.309535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.309660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.309689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.309811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.309840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.309948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.309983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.310185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.310215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.310361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.310390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.310574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.310604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.310719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.310749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.310872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.310901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 
00:28:00.840 [2024-07-12 19:20:03.311024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.311054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.311175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.311205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.311360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.311389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.311497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.311526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.311646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.311675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.311783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.311813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.311990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.312020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.312192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.312221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.312375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.312405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.312572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.312602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 
00:28:00.840 [2024-07-12 19:20:03.312780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.312809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.312983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.313013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.313113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.313142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.313273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.313305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.313553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.313583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.313715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.313744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.313999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.314028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.840 [2024-07-12 19:20:03.314222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.840 [2024-07-12 19:20:03.314262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.840 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.314480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.314509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.314636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.314665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 
00:28:00.841 [2024-07-12 19:20:03.314848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.314878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.314981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.315020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.315125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.315155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.315364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.315395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.315670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.315700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.315822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.315852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.315949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.315978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.316085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.316114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.316302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.316332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.316514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.316543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 
00:28:00.841 [2024-07-12 19:20:03.316716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.316745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.316928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.316958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.317143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.317172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.317317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.317348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.317543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.317572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.317683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.317712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.317827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.317856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.317953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.317983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.318109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.318138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.318307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.318338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 
00:28:00.841 [2024-07-12 19:20:03.318442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.318472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.318599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.318629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.318754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.318784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.318895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.318925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.319204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.319242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.319353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.319382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.319581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.319611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.319743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.319772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.319905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.319935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.320133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.320162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 
00:28:00.841 [2024-07-12 19:20:03.320362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.320392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.320571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.320600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.320823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.320851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.320963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.320992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.321209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.321246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.321436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.321465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.321657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.321686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.321874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.321904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.322029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.322058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.322240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.322270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 
00:28:00.841 [2024-07-12 19:20:03.322502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.322531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.322646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.322681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.322781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.322810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.322921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.322950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.323120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.323148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.323253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.323283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.323472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.323502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.323769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.323797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.323899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.841 [2024-07-12 19:20:03.323928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.841 qpair failed and we were unable to recover it. 00:28:00.841 [2024-07-12 19:20:03.324036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.842 [2024-07-12 19:20:03.324064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.842 qpair failed and we were unable to recover it. 
00:28:00.842 [2024-07-12 19:20:03.324297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.842 [2024-07-12 19:20:03.324327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.842 qpair failed and we were unable to recover it. 00:28:00.842 [2024-07-12 19:20:03.324440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.842 [2024-07-12 19:20:03.324468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.842 qpair failed and we were unable to recover it. 00:28:00.842 [2024-07-12 19:20:03.324576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.842 [2024-07-12 19:20:03.324605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.842 qpair failed and we were unable to recover it. 00:28:00.842 [2024-07-12 19:20:03.324708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.842 [2024-07-12 19:20:03.324737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.842 qpair failed and we were unable to recover it. 00:28:00.842 [2024-07-12 19:20:03.324864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.842 [2024-07-12 19:20:03.324893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.842 qpair failed and we were unable to recover it. 00:28:00.842 [2024-07-12 19:20:03.325000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.842 [2024-07-12 19:20:03.325029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.842 qpair failed and we were unable to recover it. 00:28:00.842 [2024-07-12 19:20:03.325133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.842 [2024-07-12 19:20:03.325163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.842 qpair failed and we were unable to recover it. 00:28:00.842 [2024-07-12 19:20:03.325292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.842 [2024-07-12 19:20:03.325322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.842 qpair failed and we were unable to recover it. 00:28:00.842 [2024-07-12 19:20:03.325499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.842 [2024-07-12 19:20:03.325528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.842 qpair failed and we were unable to recover it. 00:28:00.842 [2024-07-12 19:20:03.325725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.842 [2024-07-12 19:20:03.325754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:00.842 qpair failed and we were unable to recover it. 
00:28:00.842 [2024-07-12 19:20:03.325850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:00.842 [2024-07-12 19:20:03.325880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:00.842 qpair failed and we were unable to recover it.
00:28:01.131 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 2024-07-12 19:20:03.326063 through 19:20:03.365945, with the elapsed-time prefix advancing from 00:28:00.842 to 00:28:01.131 ...]
00:28:01.131 [2024-07-12 19:20:03.366162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.131 [2024-07-12 19:20:03.366191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.131 qpair failed and we were unable to recover it. 00:28:01.131 [2024-07-12 19:20:03.366404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.131 [2024-07-12 19:20:03.366435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.131 qpair failed and we were unable to recover it. 00:28:01.131 [2024-07-12 19:20:03.366566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.131 [2024-07-12 19:20:03.366595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.131 qpair failed and we were unable to recover it. 00:28:01.131 [2024-07-12 19:20:03.366721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.131 [2024-07-12 19:20:03.366750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.131 qpair failed and we were unable to recover it. 00:28:01.131 [2024-07-12 19:20:03.366875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.131 [2024-07-12 19:20:03.366903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.131 qpair failed and we were unable to recover it. 00:28:01.131 [2024-07-12 19:20:03.367091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.131 [2024-07-12 19:20:03.367121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.131 qpair failed and we were unable to recover it. 00:28:01.131 [2024-07-12 19:20:03.367298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.131 [2024-07-12 19:20:03.367328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.131 qpair failed and we were unable to recover it. 00:28:01.131 [2024-07-12 19:20:03.367445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.131 [2024-07-12 19:20:03.367474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.131 qpair failed and we were unable to recover it. 00:28:01.131 [2024-07-12 19:20:03.367648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.131 [2024-07-12 19:20:03.367677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.131 qpair failed and we were unable to recover it. 00:28:01.131 [2024-07-12 19:20:03.367792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.367820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 
00:28:01.132 [2024-07-12 19:20:03.367932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.367962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.368130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.368165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.368379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.368408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.368528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.368558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.368657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.368686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.368918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.368947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.369136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.369165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.369343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.369373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.369563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.369592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.369877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.369906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 
00:28:01.132 [2024-07-12 19:20:03.370015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.370044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.370216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.370261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.370372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.370401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.370510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.370539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.370723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.370752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.370961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.370991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.371250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.371280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.371462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.371492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.371663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.371693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.371864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.371892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 
00:28:01.132 [2024-07-12 19:20:03.372059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.372088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.372259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.372289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.372521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.372549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.372659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.372688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.372880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.372909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.373167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.373196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.373506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.373536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.373652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.373681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.373857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.373887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.374022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.374051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 
00:28:01.132 [2024-07-12 19:20:03.374188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.374217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.374430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.374459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.132 qpair failed and we were unable to recover it. 00:28:01.132 [2024-07-12 19:20:03.374698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.132 [2024-07-12 19:20:03.374727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.374911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.374940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.375060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.375089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.375301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.375331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.375569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.375598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.375763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.375792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.375916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.375946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.376133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.376162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 
00:28:01.133 [2024-07-12 19:20:03.376341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.376371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.376478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.376513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.376697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.376726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.376910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.376939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.377169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.377199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.377477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.377507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.377656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.377686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.377884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.377913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.378053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.378082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.378331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.378360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 
00:28:01.133 [2024-07-12 19:20:03.378528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.378557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.378728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.378758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.378956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.378985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.379168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.379197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.379377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.379408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.379611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.379642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.379815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.379845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.380044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.380073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.380181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.380210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.380433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.380463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 
00:28:01.133 [2024-07-12 19:20:03.380647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.380676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.380858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.380888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.381057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.381085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.381198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.381239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.381420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.381450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.381627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.381657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.381766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.381795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.381920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.381949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.382150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.382181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.382362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.382393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 
00:28:01.133 [2024-07-12 19:20:03.382569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.382598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.382713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.382743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.382840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.133 [2024-07-12 19:20:03.382869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.133 qpair failed and we were unable to recover it. 00:28:01.133 [2024-07-12 19:20:03.382969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.382998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.383104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.383134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.383382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.383413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.383604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.383633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.383869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.383899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.384087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.384117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.384283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.384313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 
00:28:01.134 [2024-07-12 19:20:03.384433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.384463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.384611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.384645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.384753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.384782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.384961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.384991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.385154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.385184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.385465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.385495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.385627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.385657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.385780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.385808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.385988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.386017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.386147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.386176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 
00:28:01.134 [2024-07-12 19:20:03.386295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.386325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.386612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.386642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.386819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.386848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.386965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.386994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.387124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.387153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.387287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.387319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.387419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.387448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.387577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.387606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.387777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.387806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.387975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.388005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 
00:28:01.134 [2024-07-12 19:20:03.388195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.388233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.388417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.388447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.388565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.388595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.388696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.388726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.388897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.388926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.389097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.389127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.389250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.389281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.389451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.389480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.389669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.389700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.389817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.389846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 
00:28:01.134 [2024-07-12 19:20:03.389967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.389997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.390259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.390289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.390483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.134 [2024-07-12 19:20:03.390513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.134 qpair failed and we were unable to recover it. 00:28:01.134 [2024-07-12 19:20:03.390639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.135 [2024-07-12 19:20:03.390668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.135 qpair failed and we were unable to recover it. 00:28:01.135 [2024-07-12 19:20:03.390800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.135 [2024-07-12 19:20:03.390829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.135 qpair failed and we were unable to recover it. 00:28:01.135 [2024-07-12 19:20:03.390946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.135 [2024-07-12 19:20:03.390976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.135 qpair failed and we were unable to recover it. 00:28:01.135 [2024-07-12 19:20:03.391143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.135 [2024-07-12 19:20:03.391172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.135 qpair failed and we were unable to recover it. 00:28:01.135 [2024-07-12 19:20:03.391381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.135 [2024-07-12 19:20:03.391411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.135 qpair failed and we were unable to recover it. 00:28:01.135 [2024-07-12 19:20:03.391618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.135 [2024-07-12 19:20:03.391647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.135 qpair failed and we were unable to recover it. 00:28:01.135 [2024-07-12 19:20:03.391819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.135 [2024-07-12 19:20:03.391848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.135 qpair failed and we were unable to recover it. 
00:28:01.135 [2024-07-12 19:20:03.391952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.135 [2024-07-12 19:20:03.391981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:01.135 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retry, with only the timestamps advancing, from 19:20:03.392158 through 19:20:03.434706 ...]
00:28:01.140 [2024-07-12 19:20:03.434810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.140 [2024-07-12 19:20:03.434840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:01.140 qpair failed and we were unable to recover it.
00:28:01.140 [2024-07-12 19:20:03.434959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.434988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.140 qpair failed and we were unable to recover it. 00:28:01.140 [2024-07-12 19:20:03.435105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.435134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.140 qpair failed and we were unable to recover it. 00:28:01.140 [2024-07-12 19:20:03.435256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.435287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.140 qpair failed and we were unable to recover it. 00:28:01.140 [2024-07-12 19:20:03.435411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.435441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.140 qpair failed and we were unable to recover it. 00:28:01.140 [2024-07-12 19:20:03.435633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.435662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.140 qpair failed and we were unable to recover it. 00:28:01.140 [2024-07-12 19:20:03.435848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.435877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.140 qpair failed and we were unable to recover it. 00:28:01.140 [2024-07-12 19:20:03.435980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.436010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.140 qpair failed and we were unable to recover it. 00:28:01.140 [2024-07-12 19:20:03.436201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.436239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.140 qpair failed and we were unable to recover it. 00:28:01.140 [2024-07-12 19:20:03.436360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.436390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.140 qpair failed and we were unable to recover it. 00:28:01.140 [2024-07-12 19:20:03.436560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.436589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.140 qpair failed and we were unable to recover it. 
00:28:01.140 [2024-07-12 19:20:03.436718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.436748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.140 qpair failed and we were unable to recover it. 00:28:01.140 [2024-07-12 19:20:03.436873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.436902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.140 qpair failed and we were unable to recover it. 00:28:01.140 [2024-07-12 19:20:03.437107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.437136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.140 qpair failed and we were unable to recover it. 00:28:01.140 [2024-07-12 19:20:03.437304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.437334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.140 qpair failed and we were unable to recover it. 00:28:01.140 [2024-07-12 19:20:03.437461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.437490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.140 qpair failed and we were unable to recover it. 00:28:01.140 [2024-07-12 19:20:03.437589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.140 [2024-07-12 19:20:03.437619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.437854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.437883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.438053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.438083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.438200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.438244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.438444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.438473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 
00:28:01.141 [2024-07-12 19:20:03.438584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.438613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.438789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.438819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.438918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.438948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.439061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.439090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.439297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.439326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.439514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.439544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.439658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.439688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.439886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.439914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.440096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.440127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.440322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.440352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 
00:28:01.141 [2024-07-12 19:20:03.440585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.440615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.440730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.440759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.440877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.440907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.441089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.441118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.441257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.441289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.441396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.441425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.441590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.441620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.441734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.441764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.441942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.441972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.442141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.442169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 
00:28:01.141 [2024-07-12 19:20:03.442295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.442327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.442438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.442468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.442637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.442666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.442918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.442947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.443113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.443142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.443260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.443291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.443400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.443430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.443618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.443648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.443745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.443774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.443879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.443908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 
00:28:01.141 [2024-07-12 19:20:03.444016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.444045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.444305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.444335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.444569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.444598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.444804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.444834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.444950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.444980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.141 [2024-07-12 19:20:03.445096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.141 [2024-07-12 19:20:03.445125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.141 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.445304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.445335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.445458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.445487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.445721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.445754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.445959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.445989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 
00:28:01.142 [2024-07-12 19:20:03.446104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.446133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.446254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.446283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.446418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.446448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.446595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.446625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.447126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.447157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.447305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.447335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.447458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.447488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.447600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.447629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.447879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.447908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.448099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.448129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 
00:28:01.142 [2024-07-12 19:20:03.448259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.448289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.448456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.448485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.448607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.448637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.448819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.448848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.448946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.448975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.449151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.449181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.449299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.449329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.449441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.449471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.449576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.449605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.449774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.449803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 
00:28:01.142 [2024-07-12 19:20:03.449974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.450003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.450197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.450233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.450372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.450402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.450578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.450607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.450844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.450874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.451004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.451034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.451162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.451193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.451399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.451430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.451554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.451583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.451761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.451793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 
00:28:01.142 [2024-07-12 19:20:03.451921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.451951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.452054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.452082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.452184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.452215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.452334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.452364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.452472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.452502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.452683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.142 [2024-07-12 19:20:03.452712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.142 qpair failed and we were unable to recover it. 00:28:01.142 [2024-07-12 19:20:03.452854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.452883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.453063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.453092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.453276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.453312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.453430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.453460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 
00:28:01.143 [2024-07-12 19:20:03.453560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.453590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.453764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.453793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.453906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.453936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.454111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.454140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.454383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.454413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.454526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.454556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.454724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.454753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.454922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.454952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.455127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.455156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.455280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.455311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 
00:28:01.143 [2024-07-12 19:20:03.455417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.455446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.455640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.455668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.455785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.455815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.455928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.455958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.456166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.456196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.456392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.456422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.456556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.456585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.456701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.456731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.456927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.456956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.457067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.457096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 
00:28:01.143 [2024-07-12 19:20:03.457358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.457388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.457616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.457646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.457759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.457788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.457910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.457939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.458053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.458082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.458198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.458236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.458420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.458449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.458558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.458588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.458756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.458785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 00:28:01.143 [2024-07-12 19:20:03.459005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.143 [2024-07-12 19:20:03.459034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.143 qpair failed and we were unable to recover it. 
00:28:01.143 [2024-07-12 19:20:03.459204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.143 [2024-07-12 19:20:03.459241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:01.143 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats with consecutive timestamps, 19:20:03.459477 - 19:20:03.478419 ...]
00:28:01.146 [2024-07-12 19:20:03.478532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.146 [2024-07-12 19:20:03.478561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:01.146 qpair failed and we were unable to recover it.
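errno = 111 here is ECONNREFUSED on Linux: the TCP connection to 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) is being refused because nothing is accepting on that address while the qpair tries to reconnect. A minimal stand-alone C sketch reproduces the same errno; it assumes nothing is listening on the chosen loopback port, which is only a stand-in for the log's 10.0.0.2:4420.

/* repro_econnrefused.c - minimal sketch of "connect() failed, errno = 111".
 * Assumption: no listener on 127.0.0.1:4420 (stand-in for 10.0.0.2:4420). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* On Linux this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}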
[... the connect() failed (errno = 111) / qpair failed sequence for tqpair=0x7fd87c000b90 continues through 19:20:03.479268 ...]
00:28:01.146 [2024-07-12 19:20:03.479318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfa000 (9): Bad file descriptor
00:28:01.146 [2024-07-12 19:20:03.479590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.146 [2024-07-12 19:20:03.479659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.146 qpair failed and we were unable to recover it.
[... three more identical groups for tqpair=0x7fd86c000b90 (19:20:03.479872 - 19:20:03.480458) ...]
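The flush failure above carries errno 9, EBADF ("Bad file descriptor"): by the time nvme_tcp_qpair_process_completions tries to flush tqpair=0xcfa000, the underlying socket descriptor has already been torn down, and subsequent connect attempts report a different tqpair address (0x7fd86c000b90 instead of 0x7fd87c000b90). A generic illustration of how I/O on an already-closed descriptor yields this errno follows; it shows only the errno mechanics, not SPDK's actual flush path.

/* ebadf_demo.c - writing through a descriptor that was already closed
 * fails with errno = 9 (EBADF), matching the "(9): Bad file descriptor"
 * in the flush error above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }

    close(fds[1]);                  /* descriptor torn down early ... */

    if (write(fds[1], "x", 1) < 0)  /* ... later flush attempt fails */
        printf("flush failed: errno = %d (%s)\n", errno, strerror(errno));

    close(fds[0]);
    return 0;
}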
00:28:01.146 [2024-07-12 19:20:03.480628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.146 [2024-07-12 19:20:03.480658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.146 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats with consecutive timestamps, 19:20:03.480772 - 19:20:03.496057 ...]
00:28:01.149 [2024-07-12 19:20:03.496176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.149 [2024-07-12 19:20:03.496207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.149 qpair failed and we were unable to recover it.
00:28:01.149 [2024-07-12 19:20:03.496322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.496352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.496455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.496484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.496650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.496680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.496799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.496828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.497001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.497030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.497212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.497272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.497379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.497409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.497585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.497614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.497719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.497748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.497863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.497893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 
00:28:01.149 [2024-07-12 19:20:03.498096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.498125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.498237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.498268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.498396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.498430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.498551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.498580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.498715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.498745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.498859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.498888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.499061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.499090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.499214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.499254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.499353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.499382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.499553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.499582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 
00:28:01.149 [2024-07-12 19:20:03.499697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.499729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.499848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.499877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.499994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.500023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.500198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.500237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.500341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.500370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.500472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.500501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.500608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.500638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.500836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.500866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.500981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.501010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.149 qpair failed and we were unable to recover it. 00:28:01.149 [2024-07-12 19:20:03.501125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.149 [2024-07-12 19:20:03.501154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 
00:28:01.150 [2024-07-12 19:20:03.501329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.501360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.501467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.501496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.501732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.501761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.501885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.501914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.502095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.502125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.502242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.502272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.502443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.502472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.502709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.502739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.502846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.502875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.502985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.503015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 
00:28:01.150 [2024-07-12 19:20:03.503156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.503186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.503293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.503323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.503425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.503454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.503661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.503691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.503866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.503896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.504133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.504163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.504264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.504294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.504465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.504494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.504675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.504704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.504891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.504921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 
00:28:01.150 [2024-07-12 19:20:03.505116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.505146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.505263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.505293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.505404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.505439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.505647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.505677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.505786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.505815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.505918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.505947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.506168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.506197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.506378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.506447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.506731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.506765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.506955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.506986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 
00:28:01.150 [2024-07-12 19:20:03.507086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.507116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.507291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.507322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.507493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.507522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.507694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.507723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.507849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.507879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.507976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.508005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.508254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.508285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.508457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.508486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.508603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.508632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 00:28:01.150 [2024-07-12 19:20:03.508757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.150 [2024-07-12 19:20:03.508785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.150 qpair failed and we were unable to recover it. 
00:28:01.151 [2024-07-12 19:20:03.508958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.508987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.509090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.509118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.509291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.509321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.509431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.509461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.509630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.509658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.509762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.509791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.510033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.510062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.510182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.510212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.510421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.510451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.510613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.510681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 
00:28:01.151 [2024-07-12 19:20:03.510881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.510915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.511052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.511082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.511291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.511325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.511491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.511522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.511632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.511662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.511828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.511857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.512055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.512084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.512306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.512338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.512548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.512577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.512751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.512780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 
00:28:01.151 [2024-07-12 19:20:03.512948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.512978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.513112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.513142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.513338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.513368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.513484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.513514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.513625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.513655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.513769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.513798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.513916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.513946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.514137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.514166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.514341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.514372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.514555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.514584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 
00:28:01.151 [2024-07-12 19:20:03.514682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.514711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.514888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.514917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.515040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.515070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.515196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.515241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.515413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.515443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.515542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.515571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.515754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.515789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.515927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.515956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.516126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.516155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.516267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.516297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 
00:28:01.151 [2024-07-12 19:20:03.516543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.151 [2024-07-12 19:20:03.516572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.151 qpair failed and we were unable to recover it. 00:28:01.151 [2024-07-12 19:20:03.516743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.516772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.516909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.516939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.517125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.517154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.517401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.517436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.517551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.517581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.517774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.517803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.518059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.518089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.518271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.518301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.518437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.518466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 
00:28:01.152 [2024-07-12 19:20:03.518651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.518680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.518853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.518882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.519004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.519033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.519158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.519186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.519382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.519411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.519586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.519615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.519817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.519845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.519948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.519978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.520153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.520182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.520368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.520397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 
00:28:01.152 [2024-07-12 19:20:03.520631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.520660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.520891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.520920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.521149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.521178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.521309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.521344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.521609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.521638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.521741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.521770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.521873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.521901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.522159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.522188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.522295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.522325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.522450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.522479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 
00:28:01.152 [2024-07-12 19:20:03.522655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.522684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.522809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.522843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.522970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.522998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.523168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.523197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.523377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.523445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.523655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.523689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.523888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.523919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.524042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.524072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.524328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.524358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 00:28:01.152 [2024-07-12 19:20:03.524481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.152 [2024-07-12 19:20:03.524512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.152 qpair failed and we were unable to recover it. 
00:28:01.155 [2024-07-12 19:20:03.548033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.155 [2024-07-12 19:20:03.548063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.155 qpair failed and we were unable to recover it.
00:28:01.155 [2024-07-12 19:20:03.548244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.155 [2024-07-12 19:20:03.548276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.155 qpair failed and we were unable to recover it.
00:28:01.155 [2024-07-12 19:20:03.548511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.155 [2024-07-12 19:20:03.548541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.155 qpair failed and we were unable to recover it.
00:28:01.155 [2024-07-12 19:20:03.548714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.155 [2024-07-12 19:20:03.548744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.155 qpair failed and we were unable to recover it.
00:28:01.155 [2024-07-12 19:20:03.548914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.155 [2024-07-12 19:20:03.548943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.155 qpair failed and we were unable to recover it.
00:28:01.155 [2024-07-12 19:20:03.549059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.155 [2024-07-12 19:20:03.549089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.155 qpair failed and we were unable to recover it.
00:28:01.155 [2024-07-12 19:20:03.549321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.156 [2024-07-12 19:20:03.549351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.156 qpair failed and we were unable to recover it.
00:28:01.156 [2024-07-12 19:20:03.549677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.156 [2024-07-12 19:20:03.549744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:01.156 qpair failed and we were unable to recover it.
00:28:01.156 [2024-07-12 19:20:03.549943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.156 [2024-07-12 19:20:03.549976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:01.156 qpair failed and we were unable to recover it.
00:28:01.156 [2024-07-12 19:20:03.550099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.156 [2024-07-12 19:20:03.550129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:01.156 qpair failed and we were unable to recover it.
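Note the tqpair value flipping between 0xcebed0 and 0x7fd86c000b90 and back: two distinct qpair objects are taking turns retrying, and every attempt ends with the same ECONNREFUSED and "qpair failed and we were unable to recover it." The shape of that retry loop, sketched with plain sockets (illustrative only, not SPDK's nvme_tcp_qpair_connect_sock; the retry budget and back-off are made-up example values):

/* sketch: keep retrying connect() until a listener appears or we give up */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* hypothetical helper: one connect attempt, true on success */
static bool try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return false;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    bool ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    if (!ok)
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    close(fd);
    return ok;
}

int main(void)
{
    /* address/port from the log; 100 attempts at 100 ms is arbitrary */
    for (int attempt = 0; attempt < 100; attempt++) {
        if (try_connect("10.0.0.2", 4420)) {
            puts("connected");
            return 0;
        }
        usleep(100 * 1000);  /* back off between attempts */
    }
    fprintf(stderr, "gave up: unable to recover the connection\n");
    return 1;
}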
00:28:01.157 [2024-07-12 19:20:03.564159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.157 [2024-07-12 19:20:03.564188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.157 qpair failed and we were unable to recover it. 00:28:01.157 [2024-07-12 19:20:03.564382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.157 [2024-07-12 19:20:03.564413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.157 qpair failed and we were unable to recover it. 00:28:01.157 [2024-07-12 19:20:03.564584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.157 [2024-07-12 19:20:03.564614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.157 qpair failed and we were unable to recover it. 00:28:01.157 [2024-07-12 19:20:03.564737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.157 [2024-07-12 19:20:03.564765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.157 qpair failed and we were unable to recover it. 00:28:01.157 [2024-07-12 19:20:03.565010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.157 [2024-07-12 19:20:03.565039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.157 qpair failed and we were unable to recover it. 00:28:01.157 [2024-07-12 19:20:03.565248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.157 [2024-07-12 19:20:03.565278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.157 qpair failed and we were unable to recover it. 00:28:01.157 [2024-07-12 19:20:03.565464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.565494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.565664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.565693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.565867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.565896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.566076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.566106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 
00:28:01.158 [2024-07-12 19:20:03.566232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.566262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.566442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.566471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.566576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.566604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.566782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.566811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.567000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.567029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.567145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.567173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.567353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.567384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.567502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.567531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.567732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.567762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.567947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.567976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 
00:28:01.158 [2024-07-12 19:20:03.568075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.568105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.568278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.568308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.568427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.568461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.568653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.568682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.568805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.568835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.569005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.569033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.569251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.569281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.569461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.569490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.569679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.569709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.569887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.569916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 
00:28:01.158 [2024-07-12 19:20:03.570031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.570060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.570180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.570209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.570423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.570454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.570567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.570597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.570718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.570747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.570943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.570973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.571160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.571190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.571427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.571458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.571637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.571666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.571854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.571883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 
00:28:01.158 [2024-07-12 19:20:03.572069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.572098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.572248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.572278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.572452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.158 [2024-07-12 19:20:03.572480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.158 qpair failed and we were unable to recover it. 00:28:01.158 [2024-07-12 19:20:03.572682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.572711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.572977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.573006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.573206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.573244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.573371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.573401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.573567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.573596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.573694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.573724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.573849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.573878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 
00:28:01.159 [2024-07-12 19:20:03.574072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.574101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.574214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.574254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.574356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.574385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.574623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.574652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.574826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.574855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.574976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.575005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.575118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.575147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.575329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.575359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.575474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.575504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.575623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.575652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 
00:28:01.159 [2024-07-12 19:20:03.575751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.575780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.576021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.576051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.576243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.576273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.576380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.576410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.576530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.576560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.576799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.576828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.576926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.576955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.577126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.577154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.577272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.577301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.577504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.577534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 
00:28:01.159 [2024-07-12 19:20:03.577733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.577762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.577883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.577913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.578147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.578176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.578462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.578492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.578619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.578648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.578821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.578850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.579084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.579113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.579219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.579259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.579500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.579529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.579787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.579815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 
00:28:01.159 [2024-07-12 19:20:03.579934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.579963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.580197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.580234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.580359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.580388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.580496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.159 [2024-07-12 19:20:03.580525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.159 qpair failed and we were unable to recover it. 00:28:01.159 [2024-07-12 19:20:03.580759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.580788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.580978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.581007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.581135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.581164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.581275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.581305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.581404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.581433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.581621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.581650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 
00:28:01.160 [2024-07-12 19:20:03.581817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.581851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.582032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.582061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.582186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.582215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.582401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.582431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.582662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.582691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.582810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.582839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.582953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.582983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.583148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.583177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.583308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.583339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.583509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.583537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 
00:28:01.160 [2024-07-12 19:20:03.583719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.583747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.583864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.583893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.584073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.584103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.584275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.584305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.584437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.584467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.584596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.584625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.584792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.584821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.584924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.584953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.585075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.585104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.585275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.585305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 
00:28:01.160 [2024-07-12 19:20:03.585483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.585512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.585684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.585712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.585838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.585866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.586053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.586082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.586254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.586283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.586460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.586489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.586683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.586712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.586960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.586989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.587162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.587191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.587307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.587337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 
00:28:01.160 [2024-07-12 19:20:03.587508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.587537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.587719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.587748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.588005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.588034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.588140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.588169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.588285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.160 [2024-07-12 19:20:03.588316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.160 qpair failed and we were unable to recover it. 00:28:01.160 [2024-07-12 19:20:03.588552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.161 [2024-07-12 19:20:03.588582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.161 qpair failed and we were unable to recover it. 00:28:01.161 [2024-07-12 19:20:03.588856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.161 [2024-07-12 19:20:03.588885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.161 qpair failed and we were unable to recover it. 00:28:01.161 [2024-07-12 19:20:03.589161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.161 [2024-07-12 19:20:03.589189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.161 qpair failed and we were unable to recover it. 00:28:01.161 [2024-07-12 19:20:03.589395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.161 [2024-07-12 19:20:03.589425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.161 qpair failed and we were unable to recover it. 00:28:01.161 [2024-07-12 19:20:03.589608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.161 [2024-07-12 19:20:03.589638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.161 qpair failed and we were unable to recover it. 
00:28:01.161 [2024-07-12 19:20:03.589752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.161 [2024-07-12 19:20:03.589781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:01.161 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats, identical except for its timestamps, for every further connect attempt on tqpair=0xcebed0 from 19:20:03.589896 through 19:20:03.632006 ...]
00:28:01.166 [2024-07-12 19:20:03.632139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.166 [2024-07-12 19:20:03.632168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:01.166 qpair failed and we were unable to recover it.
00:28:01.166 [2024-07-12 19:20:03.632374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.632406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.632529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.632558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.632667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.632695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.632808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.632837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.633017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.633046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.633153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.633183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.633309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.633339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.633597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.633626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.633747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.633776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.633977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.634006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 
00:28:01.166 [2024-07-12 19:20:03.634245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.634281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.634529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.634559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.634741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.634771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.634896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.634925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.635050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.635079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.635256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.635286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.635522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.635551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.635689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.635717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.635889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.635918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 00:28:01.166 [2024-07-12 19:20:03.636092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.166 [2024-07-12 19:20:03.636121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.166 qpair failed and we were unable to recover it. 
00:28:01.166 [2024-07-12 19:20:03.636302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.636332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.636528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.636557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.636795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.636824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.636947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.636976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.637089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.637118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.637301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.637331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.637538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.637566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.637671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.637700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.637885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.637913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.638203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.638260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 
00:28:01.167 [2024-07-12 19:20:03.638377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.638407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.638616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.638644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.638762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.638796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.638966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.638995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.639120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.639149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.639321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.639351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.639634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.639701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.639993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.640027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.640247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.640281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.640490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.640529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 
00:28:01.167 [2024-07-12 19:20:03.640633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.640663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.640820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.640850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.641031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.641061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.641264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.641297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.641475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.641504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.641762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.641792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.641914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.641944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.642146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.642177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.642302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.642332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.642503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.642542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 
00:28:01.167 [2024-07-12 19:20:03.642649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.642678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.642792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.642822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.642996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.643026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.643137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.643167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.643350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.643380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.643492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.643522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.643698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.643728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.643887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.643917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.644101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.644130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.644257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.644287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 
00:28:01.167 [2024-07-12 19:20:03.644486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.167 [2024-07-12 19:20:03.644516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.167 qpair failed and we were unable to recover it. 00:28:01.167 [2024-07-12 19:20:03.644698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.644727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.644904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.644934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.645118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.645149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.645415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.645445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.645551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.645580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.645696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.645726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.645918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.645948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.646152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.646182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.646361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.646390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 
00:28:01.168 [2024-07-12 19:20:03.646521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.646550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.646662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.646692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.646804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.646833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.646945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.646975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.647141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.647171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.647304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.647335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.647438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.647473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.647644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.647674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.647869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.647899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.648157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.648186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 
00:28:01.168 [2024-07-12 19:20:03.648298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.648329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.648516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.648546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.648721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.648751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.648867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.648896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.649068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.649097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.649280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.649310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.649494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.649524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.649781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.649811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.649990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.650020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.650135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.650165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 
00:28:01.168 [2024-07-12 19:20:03.650280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.650310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.650496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.650526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.650649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.650678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.650791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.650821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.651000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.651029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.651201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.651240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.651433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.651463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.651721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.651750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.651878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.651918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.652088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.652117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 
00:28:01.168 [2024-07-12 19:20:03.652296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.168 [2024-07-12 19:20:03.652326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.168 qpair failed and we were unable to recover it. 00:28:01.168 [2024-07-12 19:20:03.652432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.652462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.652629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.652658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.652854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.652884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.653076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.653105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.653364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.653395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.653655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.653684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.653851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.653880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.654051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.654080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.654253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.654283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 
00:28:01.169 [2024-07-12 19:20:03.654469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.654498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.654756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.654785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.654914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.654943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.655112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.655142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.655315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.655345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.655542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.655571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.655748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.655782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.655992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.656021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.656217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.656254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.656421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.656451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 
00:28:01.169 [2024-07-12 19:20:03.656640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.656670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.656845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.656875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.656996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.657026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.657145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.657175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.657418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.657449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.657626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.657655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.657863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.657892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.658086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.658116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.658284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.658315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 00:28:01.169 [2024-07-12 19:20:03.658425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.658455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it. 
00:28:01.169 [2024-07-12 19:20:03.658667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.169 [2024-07-12 19:20:03.658696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.169 qpair failed and we were unable to recover it.
00:28:01.169 [... the same connect()/qpair-failure triplet for tqpair=0x7fd87c000b90 repeats from 19:20:03.658806 through 19:20:03.695194 ...]
00:28:01.455 [2024-07-12 19:20:03.695506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.455 [2024-07-12 19:20:03.695572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.455 qpair failed and we were unable to recover it.
00:28:01.455 [... the same triplet for tqpair=0xcebed0 repeats from 19:20:03.695768 through 19:20:03.700569 ...]
00:28:01.456 [2024-07-12 19:20:03.700694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.456 [2024-07-12 19:20:03.700727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.456 qpair failed and we were unable to recover it.
00:28:01.457 [... the same triplet for tqpair=0x7fd87c000b90 repeats from 19:20:03.700853 through 19:20:03.702709 ...]
00:28:01.457 [2024-07-12 19:20:03.702876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.702905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.703087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.703117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.703352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.703381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.703564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.703598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.703717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.703746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.703978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.704008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.704197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.704245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.704362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.704392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.704595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.704624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.704751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.704781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 
00:28:01.457 [2024-07-12 19:20:03.704973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.705002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.705260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.705291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.705478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.705507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.705742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.705770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.705973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.706002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.706191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.706221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.706411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.706442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.706636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.706666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.706847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.706877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 00:28:01.457 [2024-07-12 19:20:03.706992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.457 [2024-07-12 19:20:03.707022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.457 qpair failed and we were unable to recover it. 
00:28:01.458 [2024-07-12 19:20:03.707279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.707308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.707429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.707459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.707565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.707594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.707713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.707743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.707926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.707956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.708147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.708177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.708373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.708404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.708575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.708605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.708810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.708839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.709006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.709034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 
00:28:01.458 [2024-07-12 19:20:03.709152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.709182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.709362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.709392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.709516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.709545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.709723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.709753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.709864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.709895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.710084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.710113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.710281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.710311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.710547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.710577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.710835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.710865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.710978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.711008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 
00:28:01.458 [2024-07-12 19:20:03.711216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.711255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.711374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.711403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.711584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.711614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.711788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.711817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.711938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.711968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.712155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.712184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.712425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.712454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.712741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.712771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.712872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.712901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 00:28:01.458 [2024-07-12 19:20:03.713165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.458 [2024-07-12 19:20:03.713195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.458 qpair failed and we were unable to recover it. 
00:28:01.458 [2024-07-12 19:20:03.713346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.713377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.713508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.713536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.713713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.713743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.714003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.714032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.714201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.714240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.714369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.714399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.714634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.714663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.714769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.714799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.714929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.714958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.715159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.715188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 
00:28:01.459 [2024-07-12 19:20:03.715442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.715472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.715675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.715704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.715819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.715848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.716019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.716048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.716252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.716283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.716411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.716441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.716686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.716715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.716951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.716980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.717172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.717201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.717322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.717353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 
00:28:01.459 [2024-07-12 19:20:03.717473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.717512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.717701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.717730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.717985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.718014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.718114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.718144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.718311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.718342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.718516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.718546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.718711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.718740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.718942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.718971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.719231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.719261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.719427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.719456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 
00:28:01.459 [2024-07-12 19:20:03.719654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.719684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.719894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.719923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.720125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.459 [2024-07-12 19:20:03.720155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.459 qpair failed and we were unable to recover it. 00:28:01.459 [2024-07-12 19:20:03.720410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.720441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.720625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.720655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.720754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.720783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.720968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.720998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.721254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.721284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.721385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.721415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.721588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.721618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 
00:28:01.460 [2024-07-12 19:20:03.721885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.721914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.722156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.722184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.722389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.722418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.722620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.722649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.722837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.722866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.723075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.723104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.723235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.723265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.723530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.723560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.723661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.723689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.723874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.723903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 
00:28:01.460 [2024-07-12 19:20:03.724076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.724106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.724222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.724266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.724449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.724478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.724604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.724633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.724827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.724856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.725117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.725146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.725261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.725290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.725480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.725509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.725744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.725774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.726021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.726050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 
00:28:01.460 [2024-07-12 19:20:03.726257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.726292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.726475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.726505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.726741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.726770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.726882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.460 [2024-07-12 19:20:03.726911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.460 qpair failed and we were unable to recover it. 00:28:01.460 [2024-07-12 19:20:03.727112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.727141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.727258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.727287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.727525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.727554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.727732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.727762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.727995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.728024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.728271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.728301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 
00:28:01.461 [2024-07-12 19:20:03.728516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.728546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.728671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.728700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.728881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.728910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.729085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.729114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.729303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.729334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.729538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.729568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.729803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.729833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.730045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.730074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.730266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.730295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.730540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.730569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 
00:28:01.461 [2024-07-12 19:20:03.730801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.730831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.730941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.730970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.731205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.731242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.731377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.731407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.731640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.731670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.731850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.731879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.732174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.732204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.732382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.732413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.732538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.732567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 00:28:01.461 [2024-07-12 19:20:03.732745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.461 [2024-07-12 19:20:03.732774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.461 qpair failed and we were unable to recover it. 
[the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats roughly 200 more times between 2024-07-12 19:20:03.731205 and 19:20:03.774934]
00:28:01.468 [2024-07-12 19:20:03.775195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.468 [2024-07-12 19:20:03.775232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.468 qpair failed and we were unable to recover it. 00:28:01.468 [2024-07-12 19:20:03.775435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.468 [2024-07-12 19:20:03.775465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.468 qpair failed and we were unable to recover it. 00:28:01.468 [2024-07-12 19:20:03.775655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.468 [2024-07-12 19:20:03.775685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.468 qpair failed and we were unable to recover it. 00:28:01.468 [2024-07-12 19:20:03.775894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.468 [2024-07-12 19:20:03.775923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.468 qpair failed and we were unable to recover it. 00:28:01.468 [2024-07-12 19:20:03.776031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.468 [2024-07-12 19:20:03.776060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.468 qpair failed and we were unable to recover it. 00:28:01.468 [2024-07-12 19:20:03.776251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.468 [2024-07-12 19:20:03.776281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.468 qpair failed and we were unable to recover it. 00:28:01.468 [2024-07-12 19:20:03.776454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.468 [2024-07-12 19:20:03.776484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.468 qpair failed and we were unable to recover it. 00:28:01.468 [2024-07-12 19:20:03.776590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.468 [2024-07-12 19:20:03.776620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.468 qpair failed and we were unable to recover it. 00:28:01.468 [2024-07-12 19:20:03.776736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.468 [2024-07-12 19:20:03.776765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.468 qpair failed and we were unable to recover it. 00:28:01.468 [2024-07-12 19:20:03.776971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.468 [2024-07-12 19:20:03.777000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.468 qpair failed and we were unable to recover it. 
00:28:01.468 [2024-07-12 19:20:03.777172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.468 [2024-07-12 19:20:03.777207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.468 qpair failed and we were unable to recover it. 00:28:01.468 [2024-07-12 19:20:03.777348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.468 [2024-07-12 19:20:03.777378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.468 qpair failed and we were unable to recover it. 00:28:01.468 [2024-07-12 19:20:03.777556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.468 [2024-07-12 19:20:03.777585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.468 qpair failed and we were unable to recover it. 00:28:01.468 [2024-07-12 19:20:03.777684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.468 [2024-07-12 19:20:03.777713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.777977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.778006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.778133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.778161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.778270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.778299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.778407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.778436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.778644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.778674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.778776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.778805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 
00:28:01.469 [2024-07-12 19:20:03.778978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.779007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.779197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.779233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.779337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.779367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.779508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.779537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.779784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.779813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.779980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.780010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.780190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.780220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.780412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.780442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.780621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.780650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.780753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.780782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 
00:28:01.469 [2024-07-12 19:20:03.780883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.780913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.781093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.781122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.781220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.781276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.781513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.781542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.781654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.781684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.781857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.781886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.782054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.782084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.782267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.782299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.782436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.782465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.782569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.782598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 
00:28:01.469 [2024-07-12 19:20:03.782786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.782816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.782934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.782964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.783063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.783092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.469 qpair failed and we were unable to recover it. 00:28:01.469 [2024-07-12 19:20:03.783275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.469 [2024-07-12 19:20:03.783305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.783502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.783531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.783788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.783817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.783939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.783968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.784147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.784176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.784375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.784405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.784583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.784613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 
00:28:01.470 [2024-07-12 19:20:03.784735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.784770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.785029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.785059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.785262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.785292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.785493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.785522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.785634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.785664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.785790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.785819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.786015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.786044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.786211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.786251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.786418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.786448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.786706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.786735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 
00:28:01.470 [2024-07-12 19:20:03.786931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.786960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.787144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.787174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.787300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.787330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.787436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.787466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.787638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.787668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.787864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.787894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.788077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.788106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.788290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.788319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.788436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.788465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.788660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.788688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 
00:28:01.470 [2024-07-12 19:20:03.788895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.788924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.789122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.789151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.789326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.789356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.789473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.789502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.470 qpair failed and we were unable to recover it. 00:28:01.470 [2024-07-12 19:20:03.789691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.470 [2024-07-12 19:20:03.789720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.789962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.789992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.790170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.790199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.790387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.790418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.790587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.790616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.790732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.790761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 
00:28:01.471 [2024-07-12 19:20:03.791024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.791054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.791162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.791192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.791311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.791341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.791521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.791550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.791723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.791751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.791928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.791962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.792202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.792241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.792544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.792574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.792791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.792820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.792943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.792972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 
00:28:01.471 [2024-07-12 19:20:03.793086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.793120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.793247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.793277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.793452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.793482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.793678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.793708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.793878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.793906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.794101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.794130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.794297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.794327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.794430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.794459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.794592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.794621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.794728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.794758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 
00:28:01.471 [2024-07-12 19:20:03.794940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.794970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.795158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.795187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.795454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.795484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.795590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.795619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.471 [2024-07-12 19:20:03.795806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.471 [2024-07-12 19:20:03.795835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.471 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.795949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.795978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.796115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.796145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.796313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.796343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.796516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.796545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.796672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.796701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 
00:28:01.472 [2024-07-12 19:20:03.796810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.796839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.796947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.796976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.797149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.797178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.797313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.797344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.797514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.797544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.797718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.797748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.797933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.797962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.798082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.798112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.798286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.798316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.798428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.798457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 
00:28:01.472 [2024-07-12 19:20:03.798590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.798619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.798732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.798761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.798885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.798914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.799196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.799232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.799347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.799376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.799482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.799511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.799631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.799660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.799887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.799916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.800093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.800123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 00:28:01.472 [2024-07-12 19:20:03.800223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.472 [2024-07-12 19:20:03.800260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.472 qpair failed and we were unable to recover it. 
00:28:01.472 [2024-07-12 19:20:03.800563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.472 [2024-07-12 19:20:03.800602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:01.472 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 (ECONNREFUSED) → sock connection error → "qpair failed and we were unable to recover it.") repeats for tqpair=0x7fd87c000b90 from 19:20:03.800705 through 19:20:03.809198 ...]
00:28:01.474 [2024-07-12 19:20:03.809363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.474 [2024-07-12 19:20:03.809432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.474 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x7fd86c000b90 through 19:20:03.813688, then again for tqpair=0x7fd87c000b90 from 19:20:03.813858 through 19:20:03.840199 ...]
00:28:01.479 [2024-07-12 19:20:03.840424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.479 [2024-07-12 19:20:03.840492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:01.479 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x7fd874000b90 through 19:20:03.841102; every attempt targets addr=10.0.0.2, port=4420 ...]
00:28:01.479 [2024-07-12 19:20:03.841334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.479 [2024-07-12 19:20:03.841367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.479 qpair failed and we were unable to recover it. 00:28:01.479 [2024-07-12 19:20:03.841628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.479 [2024-07-12 19:20:03.841657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.479 qpair failed and we were unable to recover it. 00:28:01.479 [2024-07-12 19:20:03.841822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.479 [2024-07-12 19:20:03.841851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.479 qpair failed and we were unable to recover it. 00:28:01.479 [2024-07-12 19:20:03.841956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.479 [2024-07-12 19:20:03.841986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.479 qpair failed and we were unable to recover it. 00:28:01.479 [2024-07-12 19:20:03.842172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.479 [2024-07-12 19:20:03.842203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.479 qpair failed and we were unable to recover it. 00:28:01.479 [2024-07-12 19:20:03.842401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.479 [2024-07-12 19:20:03.842432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.479 qpair failed and we were unable to recover it. 00:28:01.479 [2024-07-12 19:20:03.842557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.479 [2024-07-12 19:20:03.842587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.479 qpair failed and we were unable to recover it. 00:28:01.479 [2024-07-12 19:20:03.842795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.479 [2024-07-12 19:20:03.842825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.479 qpair failed and we were unable to recover it. 00:28:01.479 [2024-07-12 19:20:03.843008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.479 [2024-07-12 19:20:03.843036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.843235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.843265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 
00:28:01.480 [2024-07-12 19:20:03.843435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.843464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.843657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.843686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.843787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.843816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.843937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.843965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.844146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.844175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.844464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.844495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.844676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.844705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.844896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.844931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.845099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.845129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.845314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.845345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 
00:28:01.480 [2024-07-12 19:20:03.845522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.845552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.845727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.845757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.845923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.845952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.846064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.846093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.846206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.846244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.846427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.846457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.846576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.846606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.846775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.846804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.846998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.847027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.847138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.847167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 
00:28:01.480 [2024-07-12 19:20:03.847410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.847440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.847633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.847663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.847767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.847797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.847967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.847996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.848174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.848203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.848484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.848515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.848684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.848713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.848897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.848927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.849119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.849148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 00:28:01.480 [2024-07-12 19:20:03.849321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.480 [2024-07-12 19:20:03.849353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.480 qpair failed and we were unable to recover it. 
00:28:01.480 [2024-07-12 19:20:03.849561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.849589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.849689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.849718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.849889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.849918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.850084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.850114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.850360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.850391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.850563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.850592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.850825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.850855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.851024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.851053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.851317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.851346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.851529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.851559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 
00:28:01.481 [2024-07-12 19:20:03.851803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.851832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.851959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.851988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.852216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.852253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.852437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.852466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.852631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.852661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.852845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.852875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.853046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.853076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.853254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.853291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.853479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.853509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.853629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.853658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 
00:28:01.481 [2024-07-12 19:20:03.853836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.853865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.854103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.854132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.854318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.854348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.854459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.854489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.854619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.854648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.854907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.854935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.855047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.855077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.855248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.855278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.855483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.481 [2024-07-12 19:20:03.855512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.481 qpair failed and we were unable to recover it. 00:28:01.481 [2024-07-12 19:20:03.855718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.855748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 
00:28:01.482 [2024-07-12 19:20:03.855928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.855957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.856198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.856237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.856417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.856447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.856622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.856651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.856917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.856947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.857057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.857086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.857199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.857240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.857476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.857506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.857625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.857655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.857786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.857815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 
00:28:01.482 [2024-07-12 19:20:03.857940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.857969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.858155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.858184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.858383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.858414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.858591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.858619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.858741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.858771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.858949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.858979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.859090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.859119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.859327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.859357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.859524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.859554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.859721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.859751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 
00:28:01.482 [2024-07-12 19:20:03.859870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.859900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.860070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.860099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.860295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.860325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.860514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.860543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.860727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.860756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.860925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.860954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.861084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.861114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.861348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.861384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.482 [2024-07-12 19:20:03.861499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.482 [2024-07-12 19:20:03.861528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.482 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.861762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.861791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 
00:28:01.483 [2024-07-12 19:20:03.861978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.862008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.862249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.862279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.862465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.862494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.862661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.862690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.862949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.862979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.863223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.863263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.863377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.863407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.863539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.863568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.863740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.863768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.864029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.864058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 
00:28:01.483 [2024-07-12 19:20:03.864301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.864331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.864595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.864625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.864794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.864823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.865068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.865097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.865275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.865305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.865540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.865570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.865841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.865870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.866083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.866113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.866298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.866328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.866504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.866533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 
00:28:01.483 [2024-07-12 19:20:03.866715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.866744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.866945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.866975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.867158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.867187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.867458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.867488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.867614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.867644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.867743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.867772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.867984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.868014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.868212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.868253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.868376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.868406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 00:28:01.483 [2024-07-12 19:20:03.868608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.483 [2024-07-12 19:20:03.868638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.483 qpair failed and we were unable to recover it. 
00:28:01.483 [2024-07-12 19:20:03.868750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.868779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.868908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.868938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.869174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.869204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.869504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.869535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.869646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.869675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.869873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.869902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.870163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.870192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.870336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.870372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.870572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.870602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.870873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.870903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 
00:28:01.484 [2024-07-12 19:20:03.871137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.871166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.871354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.871386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.871520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.871549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.871807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.871837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.872047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.872077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.872320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.872350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.872610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.872639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.872805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.872834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.873029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.873058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.873184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.873213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 
00:28:01.484 [2024-07-12 19:20:03.873409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.873438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.873618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.873648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.873761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.873790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.874046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.874074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.874199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.874236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.874497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.874527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.874642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.874671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.874849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.874878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.875142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.875171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.875373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.875403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 
00:28:01.484 [2024-07-12 19:20:03.875601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.875631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.875761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.484 [2024-07-12 19:20:03.875790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.484 qpair failed and we were unable to recover it. 00:28:01.484 [2024-07-12 19:20:03.875901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.875930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.876189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.876218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.876350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.876380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.876548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.876577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.876706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.876736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.876859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.876888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.877071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.877100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.877272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.877303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 
00:28:01.485 [2024-07-12 19:20:03.877564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.877593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.877776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.877805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.877937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.877966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.878144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.878173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.878297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.878328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.878516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.878544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.878708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.878737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.878915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.878950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.879187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.879216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.879353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.879382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 
00:28:01.485 [2024-07-12 19:20:03.879551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.879581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.879734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.879764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.879971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.880000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.880213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.880250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.880375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.880405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.880527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.880557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.880682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.880713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.880952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.880981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.485 [2024-07-12 19:20:03.881157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.485 [2024-07-12 19:20:03.881186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.485 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.881431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.881462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 
00:28:01.486 [2024-07-12 19:20:03.881584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.881613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.881828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.881857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.882052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.882081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.882346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.882377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.882564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.882594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.882715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.882744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.882919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.882948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.883154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.883183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.883444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.883474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.883708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.883738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 
00:28:01.486 [2024-07-12 19:20:03.883932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.883962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.884193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.884223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.884416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.884446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.884686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.884715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.884869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.884900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.885156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.885185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.885469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.885500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.885777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.885807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.886006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.886036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.886300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.886330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 
00:28:01.486 [2024-07-12 19:20:03.886599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.886628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.886855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.886885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.887141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.887170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.887362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.887392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.887655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.887684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.887897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.887926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.888168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.888197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.888318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.888353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.888607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.888636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.888814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.888843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 
00:28:01.486 [2024-07-12 19:20:03.889027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.889057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.889265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.889296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.889510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.889539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.486 [2024-07-12 19:20:03.889667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.486 [2024-07-12 19:20:03.889696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.486 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.889957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.889986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.890296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.890326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.890555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.890584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.890823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.890852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.891110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.891138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.891349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.891379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 
00:28:01.487 [2024-07-12 19:20:03.891623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.891652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.891785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.891814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.892057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.892086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.892338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.892368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.892624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.892654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.892831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.892860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.892992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.893022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.893247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.893277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.893473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.893502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.893736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.893765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 
00:28:01.487 [2024-07-12 19:20:03.893968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.893997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.894259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.894288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.894424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.894454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.894694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.894723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.894884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.894952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.895211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.895254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.895495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.895526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.895784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.895814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.895994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.896024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.896148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.896178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 
00:28:01.487 [2024-07-12 19:20:03.896480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.896511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.896747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.896777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.896896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.896925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.897105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.897143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.487 qpair failed and we were unable to recover it. 00:28:01.487 [2024-07-12 19:20:03.897255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.487 [2024-07-12 19:20:03.897285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.897499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.897528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.897697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.897726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.898013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.898051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.898297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.898328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.898511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.898541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 
00:28:01.488 [2024-07-12 19:20:03.898752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.898781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.898957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.898987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.899202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.899239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.899519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.899549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.899845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.899874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.900063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.900092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.900270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.900300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.900559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.900589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.900759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.900788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.901071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.901101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 
00:28:01.488 [2024-07-12 19:20:03.901371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.901400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.901692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.901722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.901917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.901947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.902234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.902265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.902458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.902488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.902695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.902724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.902979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.903008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.903266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.903296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.903574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.903604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.903888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.903918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 
00:28:01.488 [2024-07-12 19:20:03.904173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.904203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.904493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.904524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.904786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.904816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.905016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.905046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.905302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.905334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.488 qpair failed and we were unable to recover it. 00:28:01.488 [2024-07-12 19:20:03.905570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.488 [2024-07-12 19:20:03.905600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.905783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.905813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.906094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.906123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.906308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.906338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.906525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.906555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 
00:28:01.489 [2024-07-12 19:20:03.906826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.906855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.907043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.907073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.907184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.907213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.907410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.907441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.907700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.907729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.908024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.908053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.908348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.908378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.908567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.908603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.908844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.908873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.909001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.909030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 
00:28:01.489 [2024-07-12 19:20:03.909241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.909272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.909469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.909499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.909632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.909661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.909894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.909924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.910202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.910237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.910488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.910518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.910776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.910806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.910929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.910958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.911221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.911262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 00:28:01.489 [2024-07-12 19:20:03.911482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.489 [2024-07-12 19:20:03.911512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.489 qpair failed and we were unable to recover it. 
00:28:01.495 [2024-07-12 19:20:03.961474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.961504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.961746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.961776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.962019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.962048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.962240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.962271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.962389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.962419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.962605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.962634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.962820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.962849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.963119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.963149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.963267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.963297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.963488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.963517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 
00:28:01.495 [2024-07-12 19:20:03.963697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.963726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.963930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.963960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.964253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.964285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.964558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.964587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.964878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.964907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.965122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.965151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.965292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.965323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.965533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.965562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.495 qpair failed and we were unable to recover it. 00:28:01.495 [2024-07-12 19:20:03.965832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.495 [2024-07-12 19:20:03.965862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.966109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.966139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 
00:28:01.496 [2024-07-12 19:20:03.966403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.966434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.966734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.966763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.966958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.966987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.967193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.967222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.967474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.967510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.967718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.967747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.967986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.968016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.968202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.968242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.968427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.968457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.968722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.968751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 
00:28:01.496 [2024-07-12 19:20:03.968924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.968954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.969222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.969263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.969532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.969562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.969847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.969877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.970151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.970181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.970469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.970500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.970778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.970807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.971100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.971130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.971347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.971379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.971620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.971650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 
00:28:01.496 [2024-07-12 19:20:03.971921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.971951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.972140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.972169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.972434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.972465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.972747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.972777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.973061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.973090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.973284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.973314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.973581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.973610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.973864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.973893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.974099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.974128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.974399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.974431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 
00:28:01.496 [2024-07-12 19:20:03.974623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.974652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.974876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.974907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.975158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.975188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.975390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.975421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.496 [2024-07-12 19:20:03.975684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.496 [2024-07-12 19:20:03.975714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.496 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.975888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.975917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.976183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.976213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.976417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.976447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.976634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.976663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.976843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.976872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 
00:28:01.497 [2024-07-12 19:20:03.977139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.977169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.977455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.977487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.977684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.977713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.977970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.978000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.978187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.978221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.978494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.978525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.978792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.978821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.979061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.979090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.979360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.979391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.979527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.979557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 
00:28:01.497 [2024-07-12 19:20:03.979822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.979852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.980146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.980174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.980449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.980480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.980724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.980754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.981008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.981037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.981290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.981321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.981562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.981592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.981764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.981793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.982088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.982118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.982330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.982361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 
00:28:01.497 [2024-07-12 19:20:03.982629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.982659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.982849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.982879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.983129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.983158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.983347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.983379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.983568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.983597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.983738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.983768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.984071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.984100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.984210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.984248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.984435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.984465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.984730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.984760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 
00:28:01.497 [2024-07-12 19:20:03.984947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.984977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.985274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.985305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.985490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.985519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.985722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.497 [2024-07-12 19:20:03.985751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.497 qpair failed and we were unable to recover it. 00:28:01.497 [2024-07-12 19:20:03.985945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.985974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.986287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.986318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.986592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.986621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.986908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.986938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.987218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.987259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.987561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.987591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 
00:28:01.498 [2024-07-12 19:20:03.987853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.987882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.988139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.988168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.988469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.988500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.988768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.988798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.988916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.988951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.989216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.989265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.989463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.989492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.989676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.989705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.989983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.990012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.990217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.990259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 
00:28:01.498 [2024-07-12 19:20:03.990486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.990516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.990787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.990817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.991057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.991087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.991262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.991293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.991545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.991574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.991815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.991845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.992017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.992046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.992294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.992326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.992553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.992583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 00:28:01.498 [2024-07-12 19:20:03.992783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.498 [2024-07-12 19:20:03.992814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.498 qpair failed and we were unable to recover it. 
00:28:01.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 464601 Killed "${NVMF_APP[@]}" "$@"
00:28:01.498 19:20:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
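The "Killed" message above is the key to the storm of refused connections: bash reports that the NVMF target app (pid 464601) received SIGKILL mid-test, so every reconnect attempt from the host initiator lands on a port with no listener. errno 111 on Linux is ECONNREFUSED. The minimal standalone C sketch below reproduces the same errno; the 10.0.0.2:4420 endpoint is taken from the log and stands in for any reachable host with nothing listening on the port (this is an illustration, not SPDK code):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* Endpoint taken from the log; any reachable host with no listener
     * on the port behaves the same way. An unreachable or firewalled
     * host would give ETIMEDOUT/EHOSTUNREACH instead. */
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the listener gone, the kernel refuses the connection and
         * this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}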
00:28:01.498 19:20:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:01.498 19:20:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:01.498 19:20:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:01.498 19:20:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:01.780 [... the same connect() failed (errno = 111) / sock connection error (tqpair=0x7fd86c000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." sequence repeated 23 more times, 19:20:04.002430 through 19:20:04.007927, interleaved with the nvmf_tgt startup trace below ...]
00:28:01.780 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:01.780 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=465396
00:28:01.780 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 465396
00:28:01.780 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 465396 ']'
00:28:01.780 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:01.780 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:01.780 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:01.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:01.780 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:01.780 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
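The trace above shows the script launching nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then calling waitforlisten with max_retries=100 against the RPC socket /var/tmp/spdk.sock; the connect() spam continues until that target comes up. The real waitforlisten is a bash helper in common/autotest_common.sh; the following is only a C sketch of the same polling idea, with the socket path and retry cap mirroring the traced values and the 100 ms sleep interval an assumption of mine:

```c
/* Sketch of a waitforlisten-style poll (the real helper is bash in
 * common/autotest_common.sh): retry connecting to the target's RPC
 * UNIX socket until it accepts, or give up after max_retries. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;               /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);         /* assumed 100 ms between retries */
    }
    return -1;                      /* never came up within max_retries */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "Waiting for process timed out\n");
        return 1;
    }
    printf("listening\n");
    return 0;
}
```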
00:28:01.781 [... connect() failed (errno = 111) / sock connection error (tqpair=0x7fd86c000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." repeated 20 further times, 19:20:04.008279 through 19:20:04.012947 ...]
00:28:01.781 [... one further repeat for tqpair=0x7fd86c000b90 at 19:20:04.013166 ...]
00:28:01.781 [2024-07-12 19:20:04.013504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.781 [2024-07-12 19:20:04.013581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.781 qpair failed and we were unable to recover it.
00:28:01.781 [... the same sequence repeated 2 more times for the second tqpair=0x7fd87c000b90 (19:20:04.013844, 19:20:04.014159), then 6 more times for tqpair=0x7fd86c000b90, 19:20:04.014350 through 19:20:04.015492 ...]
00:28:01.785 [... connect() failed (errno = 111) / sock connection error (tqpair=0x7fd86c000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." repeated 130 further times, 19:20:04.015771 through 19:20:04.045316 ...]
00:28:01.785 [2024-07-12 19:20:04.045628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.045657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.045855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.045884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.046073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.046102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.046279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.046311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.046447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.046477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.046599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.046628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.046823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.046853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.047055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.047086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.047287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.047318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.047537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.047567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 
00:28:01.785 [2024-07-12 19:20:04.047745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.047775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.047907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.047937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.048183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.048214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.048451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.048482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.048599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.048628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.048752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.048781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.048952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.048981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.049278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.049311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.049455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.049486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.049604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.049634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 
00:28:01.785 [2024-07-12 19:20:04.049756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.049786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.049998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.050027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.050206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.050254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.050430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.050460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.050635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.050671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.050792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.050822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.050998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.051027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.051148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.051178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.051366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.051398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.051671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.051700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 
00:28:01.785 [2024-07-12 19:20:04.051889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.051918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.052040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.052069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.052249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.052279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.785 qpair failed and we were unable to recover it. 00:28:01.785 [2024-07-12 19:20:04.052475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.785 [2024-07-12 19:20:04.052505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.786 qpair failed and we were unable to recover it. 00:28:01.786 [2024-07-12 19:20:04.052684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.786 [2024-07-12 19:20:04.052714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.786 qpair failed and we were unable to recover it. 00:28:01.786 [2024-07-12 19:20:04.052894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.786 [2024-07-12 19:20:04.052923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.786 qpair failed and we were unable to recover it. 00:28:01.786 [2024-07-12 19:20:04.053112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.786 [2024-07-12 19:20:04.053142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.786 qpair failed and we were unable to recover it. 00:28:01.786 [2024-07-12 19:20:04.053329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.786 [2024-07-12 19:20:04.053361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.786 qpair failed and we were unable to recover it. 00:28:01.786 [2024-07-12 19:20:04.053485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.786 [2024-07-12 19:20:04.053515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.786 qpair failed and we were unable to recover it. 00:28:01.786 [2024-07-12 19:20:04.053632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.786 [2024-07-12 19:20:04.053662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.786 qpair failed and we were unable to recover it. 
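On Linux, errno = 111 is ECONNREFUSED: the initiator can reach 10.0.0.2, but nothing is listening on port 4420 yet, so the kernel rejects every TCP connect() and the driver declares the qpair unrecoverable; the log repeats this same three-line pattern for every attempt in the 19:20:04.036-04.080 window. A minimal sketch of the failure the posix_sock_create error describes, using only standard POSIX calls (check_connect is an illustrative helper, not an SPDK function):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Illustrative only: one TCP connect attempt against an address with no
 * listener fails with ECONNREFUSED, which is numbered 111 on Linux. */
static int check_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -errno;

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    inet_pton(AF_INET, ip, &sa.sin_addr);

    int rc = connect(fd, (struct sockaddr *)&sa, sizeof(sa));
    if (rc < 0) {
        rc = -errno;
        fprintf(stderr, "connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return rc;
}

int main(void)
{
    /* Address and port mirror the log: NVMe/TCP listener at 10.0.0.2:4420. */
    return check_connect("10.0.0.2", 4420) == 0 ? 0 : 1;
}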
00:28:01.786 [2024-07-12 19:20:04.054755] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:28:01.786 [2024-07-12 19:20:04.054811] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:01.786 [2024-07-12 19:20:04.054904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.786 [2024-07-12 19:20:04.054939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.786 qpair failed and we were unable to recover it.
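In the middle of the connect storm, an SPDK nvmf application starts up and logs its DPDK EAL parameters: core mask 0xF0 pins it to cores 4-7, --no-telemetry disables the telemetry socket, --file-prefix=spdk0 keeps its hugepage files distinct from any other DPDK instance, --base-virtaddr requests a fixed mapping address (useful for multi-process and vfio-user setups), and --proc-type=auto lets EAL pick the primary or secondary role. A hedged sketch of feeding that same argument vector to DPDK's rte_eal_init() (argument strings copied from the log line; error handling kept minimal):

#include <rte_eal.h>
#include <stdio.h>

int main(void)
{
    /* Argument vector copied from the "[ DPDK EAL parameters: ... ]" line;
     * rte_eal_init() parses it exactly as it would a real command line. */
    char *eal_argv[] = {
        "nvmf",
        "-c", "0xF0",
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--log-level=lib.cryptodev:5",
        "--log-level=lib.power:5",
        "--log-level=user1:6",
        "--base-virtaddr=0x200000000000",
        "--match-allocations",
        "--file-prefix=spdk0",
        "--proc-type=auto",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "rte_eal_init() failed\n");
        return 1;
    }
    return rte_eal_cleanup() == 0 ? 0 : 1;
}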
00:28:01.789 [2024-07-12 19:20:04.080240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.789 [2024-07-12 19:20:04.080273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.789 qpair failed and we were unable to recover it.
00:28:01.789 [2024-07-12 19:20:04.080525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.080556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.080696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.080728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.080846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.080876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.080999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.081029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.081275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.081307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.081581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.081612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.081794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.081825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.082035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.082066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.082281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.082314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.082439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.082470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 
00:28:01.789 [2024-07-12 19:20:04.082666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.082698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.082820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.082852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.082963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.082992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.083205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.083247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.083490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.083562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.083774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.083809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.083945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.083978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.084100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.084129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.084357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.084389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.789 qpair failed and we were unable to recover it. 00:28:01.789 [2024-07-12 19:20:04.084574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.789 [2024-07-12 19:20:04.084604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 
00:28:01.790 [... the failure triplet repeats for tqpair=0xcebed0 ...]
00:28:01.790 EAL: No free 2048 kB hugepages reported on node 1
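The EAL line above comes from DPDK's environment abstraction layer, which SPDK uses for hugepage-backed memory: it reports that NUMA node 1 has no free 2048 kB hugepages. As a minimal sketch (not SPDK or DPDK code, and assuming a NUMA-enabled Linux kernel that exposes per-node hugepage counters in sysfs), the counter behind that message can be read directly; the file name hugepage_check.c is illustrative, while node1 and the 2048 kB page size mirror the message:

    /* hugepage_check.c -- minimal sketch, not SPDK/DPDK code.
     * Reads the per-node free-hugepage counter behind the EAL message;
     * the sysfs path is the standard Linux location for 2048 kB pages. */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/devices/system/node/node1/hugepages/"
                           "hugepages-2048kB/free_hugepages";
        FILE *f = fopen(path, "r");
        if (f == NULL) {
            perror(path);   /* e.g. non-NUMA kernel or no node 1 */
            return 1;
        }
        long free_pages = 0;
        if (fscanf(f, "%ld", &free_pages) == 1)
            printf("node 1: %ld free 2048 kB hugepages\n", free_pages);
        fclose(f);
        return 0;
    }

A zero in that counter is what triggers the EAL message; test setups typically reserve pages up front through vm.nr_hugepages or the per-node nr_hugepages file before starting the target.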
00:28:01.790 [2024-07-12 19:20:04.088940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.088972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.089278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.089313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.089513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.089543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.089719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.089750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.089873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.089902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.090096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.090125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.090408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.090439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.090568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.090597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.090797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.090826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.091006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.091035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 
00:28:01.790 [2024-07-12 19:20:04.091157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.091189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.091439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.091469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.091615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.091645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.091859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.091889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.092075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.092104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.092297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.092327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.092522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.092551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.092725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.092755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.092925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.092955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 00:28:01.790 [2024-07-12 19:20:04.093125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.790 [2024-07-12 19:20:04.093156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.790 qpair failed and we were unable to recover it. 
00:28:01.790 [2024-07-12 19:20:04.093361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.093392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.093566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.093595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.093776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.093806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.093919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.093948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.094152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.094181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.094457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.094489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.094675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.094705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.094943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.094972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.095156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.095187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.095380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.095412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 
00:28:01.791 [2024-07-12 19:20:04.095537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.095567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.095685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.095715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.095904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.095933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.096066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.096095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.096282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.096313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.096435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.096465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.096576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.096605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.096794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.096824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.096934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.096963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.097141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.097170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 
00:28:01.791 [2024-07-12 19:20:04.097366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.097398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.097518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.097553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.097797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.097826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.098084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.098113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.098242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.098274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.098403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.098433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.098637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.098666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.098767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.098797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.098973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.099002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.099266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.099297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 
00:28:01.791 [2024-07-12 19:20:04.099468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.099498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.099679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.099708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.099889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.099918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.100032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.100062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.100244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.100275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.100470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.100500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.100686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.100715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.100841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.100871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.100998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.101028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 00:28:01.791 [2024-07-12 19:20:04.101271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.791 [2024-07-12 19:20:04.101303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.791 qpair failed and we were unable to recover it. 
00:28:01.791 [2024-07-12 19:20:04.101488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.101517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.101646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.101693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.101871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.101901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.102089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.102118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.102313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.102344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.102456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.102486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.102757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.102787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.102965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.102994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.103246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.103277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.103457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.103487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 
00:28:01.792 [2024-07-12 19:20:04.103740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.103769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.103952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.103982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.104106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.104135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.104389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.104420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.104634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.104664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.104841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.104871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.105108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.105137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.105274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.105306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.105552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.105582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.105820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.105849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 
00:28:01.792 [2024-07-12 19:20:04.105982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.106011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.106184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.106219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.106430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.106460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.106561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.106590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.106773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.106803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.107010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.107039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.107146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.107176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.107426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.107458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.107585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.107614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.107738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.107767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 
00:28:01.792 [2024-07-12 19:20:04.107938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.107968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.108148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.108177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.108314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.108346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.108533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.108562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.792 qpair failed and we were unable to recover it. 00:28:01.792 [2024-07-12 19:20:04.108831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.792 [2024-07-12 19:20:04.108861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.109037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.109067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.109198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.109248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.109430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.109460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.109633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.109663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.109794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.109823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 
00:28:01.793 [2024-07-12 19:20:04.109988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.110017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.110191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.110220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.110438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.110469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.110649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.110679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.110972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.111003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.111122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.111152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.111323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.111355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.111589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.111618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.111730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.111760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.111932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.111962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 
00:28:01.793 [2024-07-12 19:20:04.112153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.112183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.112308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.112338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.112606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.112636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.112831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.112860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.113040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.113070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.113246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.113277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.113514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.113543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.113718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.113747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.113997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.114026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.114208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.114247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 
00:28:01.793 [2024-07-12 19:20:04.114433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.114463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.114630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.114664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.114919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.114949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.115170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.115200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.115362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.115431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.115581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.115614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.115830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.115860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.116115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.116145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.116315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.116347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.116609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.116639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 
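[editor's note] Note the tqpair value changing above: the earlier attempts report tqpair=0x7fd86c000b90, and from 19:20:04.115 onward the same failure pair carries tqpair=0xcebed0 (later in this stretch it moves to 0x7fd87c000b90 and then back again). Each distinct address is presumably a separate qpair object allocated for a fresh connection attempt and torn down when it fails. When triaging a capture like this, listing the distinct pointers can help separate the attempts; a small stand-alone helper that does so from stdin (hypothetical tooling, not part of the test suite):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[4096], seen[16][32];
        int nseen = 0;

        while (fgets(line, sizeof(line), stdin)) {
            /* Scan each line for every "tqpair=0x..." token. */
            for (char *p = line; (p = strstr(p, "tqpair=0x")) != NULL; p += 7) {
                char val[32] = "";
                if (sscanf(p + 7, "%31[0-9a-fx]", val) != 1)
                    continue;
                int known = 0;
                for (int i = 0; i < nseen; i++)
                    if (strcmp(seen[i], val) == 0)
                        known = 1;
                if (!known && nseen < 16) {
                    strcpy(seen[nseen++], val);
                    printf("new tqpair: %s\n", val);
                }
            }
        }
        return 0;
    }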
00:28:01.793 [2024-07-12 19:20:04.116877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.116907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.117013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.117042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.117269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.117303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.117429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.117459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.117696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.793 [2024-07-12 19:20:04.117726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.793 qpair failed and we were unable to recover it. 00:28:01.793 [2024-07-12 19:20:04.117922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.117952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.118235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.118267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.118536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.118565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.118676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.118705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.118862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.118892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 
00:28:01.794 [2024-07-12 19:20:04.119134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.119163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.119334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.119364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.119484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.119514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.119703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.119732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.119968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.119998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.120123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.120159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.120276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.120310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.120434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.120463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.120578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.120615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.120721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.120750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 
00:28:01.794 [2024-07-12 19:20:04.120877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.120906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.121084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.121113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.121353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.121383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.121487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.121517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.121690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.121720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.121841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.121870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.122039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.122067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.122308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.122338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.122471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.122500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.122599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.122627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 
00:28:01.794 [2024-07-12 19:20:04.122808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.122838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.123018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.123047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.123241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.123272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.123550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.123580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.123761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.123790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.123970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.123999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.124252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.124283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.124389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.124418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.124668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.124697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.124809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.124838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 
00:28:01.794 [2024-07-12 19:20:04.125010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.125039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.125297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.125329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.125517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.125546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.125662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.125691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.125972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.126001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.126135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.126165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.794 qpair failed and we were unable to recover it. 00:28:01.794 [2024-07-12 19:20:04.126366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.794 [2024-07-12 19:20:04.126396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.126578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.126608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.126892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.126922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.127110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.127139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 
00:28:01.795 [2024-07-12 19:20:04.127262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.127292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.127481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.127511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.127683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.127712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.127907] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:01.795 [2024-07-12 19:20:04.127971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.128000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.128130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.128160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.128426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.128456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.128570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.128599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.128849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.128879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.129061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.129090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.129240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.129270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 
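[editor's note] One non-failure record is interleaved just above: app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4. That is an SPDK application starting up and sizing itself to 4 cores while the host side keeps dialing, which is consistent with the refusals: until the restarted target re-opens its listener on port 4420, every connect() comes back ECONNREFUSED. At the socket level, the host's behavior amounts to a bounded reconnect loop; a stand-alone sketch using plain POSIX sockets (an illustration of the pattern, not SPDK's implementation):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        for (int attempt = 1; attempt <= 5; attempt++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return 1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                printf("attempt %d: connected\n", attempt);
                close(fd);
                return 0;
            }
            /* errno = 111 (ECONNREFUSED) is what this log records
             * on every try while the listener is down. */
            printf("attempt %d: connect() failed, errno = %d (%s)\n",
                   attempt, errno, strerror(errno));
            close(fd);
            sleep(1);
        }
        return 1;
    }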
00:28:01.795 [2024-07-12 19:20:04.129372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.129401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.129667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.129696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.129875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.129904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.130028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.130056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.130240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.130281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.130551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.130581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.130769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.130799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.131020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.131050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.131251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.131280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.131416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.131444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 
00:28:01.795 [2024-07-12 19:20:04.131669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.131698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.131823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.131852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.132045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.132074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.132366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.132397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.132596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.132625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.132876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.132905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.133097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.133128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.133252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.133283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.133413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.133443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.133625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.133654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 
00:28:01.795 [2024-07-12 19:20:04.133914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.133944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.134047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.134078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.134340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.134371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.134640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.134670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.134783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.134812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.134942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.134971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.135247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.795 [2024-07-12 19:20:04.135316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.795 qpair failed and we were unable to recover it. 00:28:01.795 [2024-07-12 19:20:04.135532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.135567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.135763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.135794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.136032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.136062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 
00:28:01.796 [2024-07-12 19:20:04.136241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.136273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.136379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.136409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.136578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.136609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.136849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.136879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.137057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.137087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.137253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.137284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.137472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.137503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.137697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.137727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.137963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.137993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.138109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.138155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 
00:28:01.796 [2024-07-12 19:20:04.138361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.138392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.138577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.138607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.138872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.138901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.139015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.139044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.139217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.139258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.139378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.139407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.139508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.139539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.139639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.139670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.139847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.139877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.140056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.140087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 
00:28:01.796 [2024-07-12 19:20:04.140189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.140220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.140434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.140465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.140580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.140610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.140734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.140764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.140948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.140978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.141162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.141192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.141411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.141442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.141650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.141680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.141848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.141877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.142079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.142108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 
00:28:01.796 [2024-07-12 19:20:04.142281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.142312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.142551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.142581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.142770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.142799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.143032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.143061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.143238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.143271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.143508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.143539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.143750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.143785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.143897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.796 [2024-07-12 19:20:04.143927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.796 qpair failed and we were unable to recover it. 00:28:01.796 [2024-07-12 19:20:04.144171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.797 [2024-07-12 19:20:04.144201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.797 qpair failed and we were unable to recover it. 00:28:01.797 [2024-07-12 19:20:04.144335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.797 [2024-07-12 19:20:04.144366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.797 qpair failed and we were unable to recover it. 
00:28:01.797 [2024-07-12 19:20:04.144484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.797 [2024-07-12 19:20:04.144513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.797 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every attempt through 2024-07-12 19:20:04.155185 ...]
00:28:01.798 [2024-07-12 19:20:04.155376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.798 [2024-07-12 19:20:04.155412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420
00:28:01.798 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7fd87c000b90 through 2024-07-12 19:20:04.159198 ...]
00:28:01.798 [2024-07-12 19:20:04.159324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.798 [2024-07-12 19:20:04.159358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.798 qpair failed and we were unable to recover it.
[... the same sequence resumes for tqpair=0x7fd86c000b90 and repeats through 2024-07-12 19:20:04.190364 (log wall clock reaching 00:28:01.802); every attempt fails with errno = 111 and ends "qpair failed and we were unable to recover it." ...]
00:28:01.802 [2024-07-12 19:20:04.190565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.190595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.190829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.190858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.191041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.191071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.191194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.191222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.191407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.191437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.191547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.191576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.191747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.191775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.191906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.191936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.192041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.192071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.192260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.192292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 
00:28:01.802 [2024-07-12 19:20:04.192479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.192508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.192710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.192739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.192923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.192952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.193089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.193119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.193380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.193412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.193583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.193612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.193795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.193825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.193933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.193962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.194170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.194199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 00:28:01.802 [2024-07-12 19:20:04.194376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.802 [2024-07-12 19:20:04.194406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.802 qpair failed and we were unable to recover it. 
00:28:01.803 [2024-07-12 19:20:04.194580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.194609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.194794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.194823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.195031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.195061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.195237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.195267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.195457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.195487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.195744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.195773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.195890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.195924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.196189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.196218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.196402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.196431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.196598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.196626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 
00:28:01.803 [2024-07-12 19:20:04.196801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.196830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.197086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.197115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.197377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.197408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.197593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.197622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.197807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.197836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.198015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.198043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.198208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.198253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.198451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.198481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.198711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.198740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.198947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.198976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 
00:28:01.803 [2024-07-12 19:20:04.199165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.199194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.199446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.199477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.199591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.199622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.199822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.199855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.200089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.200119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.200289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.200319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.200528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.200558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.200739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.200768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.201043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.201072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 00:28:01.803 [2024-07-12 19:20:04.201282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.803 [2024-07-12 19:20:04.201311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.803 qpair failed and we were unable to recover it. 
00:28:01.803 [2024-07-12 19:20:04.203385] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:01.803 [2024-07-12 19:20:04.203423] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:01.803 [2024-07-12 19:20:04.203431] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:01.803 [2024-07-12 19:20:04.203437] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:01.803 [2024-07-12 19:20:04.203442] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
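The notices above spell out how to pull the tracepoint data while the target app is still alive; acting on them looks like this (a minimal sketch using exactly the commands the notices name, not something run in this job):

  spdk_trace -s nvmf -i 0        # snapshot the running 'nvmf' app's trace, shm instance 0
  cp /dev/shm/nvmf_trace.0 .     # or keep the shm file for offline analysis/debug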
00:28:01.804 [2024-07-12 19:20:04.203981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:28:01.804 [2024-07-12 19:20:04.204068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:28:01.804 [2024-07-12 19:20:04.204174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:28:01.804 [2024-07-12 19:20:04.204175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:28:01.804 [2024-07-12 19:20:04.205031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.804 [2024-07-12 19:20:04.205103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:01.804 qpair failed and we were unable to recover it.
00:28:01.804 [2024-07-12 19:20:04.205405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.804 [2024-07-12 19:20:04.205452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:01.804 qpair failed and we were unable to recover it.
[... the same connect() / qpair error pair, each followed by "qpair failed and we were unable to recover it.", repeats for tqpair=0xcebed0 roughly 100 more times between 19:20:04.205 and 19:20:04.231 ...]
00:28:01.806 [2024-07-12 19:20:04.231291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.806 [2024-07-12 19:20:04.231322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.806 qpair failed and we were unable to recover it. 00:28:01.806 [2024-07-12 19:20:04.231508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.806 [2024-07-12 19:20:04.231539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.806 qpair failed and we were unable to recover it. 00:28:01.806 [2024-07-12 19:20:04.231742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.806 [2024-07-12 19:20:04.231773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.806 qpair failed and we were unable to recover it. 00:28:01.806 [2024-07-12 19:20:04.231972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.806 [2024-07-12 19:20:04.232002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.806 qpair failed and we were unable to recover it. 00:28:01.806 [2024-07-12 19:20:04.232179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.806 [2024-07-12 19:20:04.232209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.806 qpair failed and we were unable to recover it. 00:28:01.806 [2024-07-12 19:20:04.232409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.806 [2024-07-12 19:20:04.232440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.806 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.232706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.232736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.232975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.233004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.233120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.233149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.233359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.233389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 
00:28:01.807 [2024-07-12 19:20:04.233574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.233604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.233806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.233835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.233960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.233990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.234107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.234136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.234415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.234447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.234586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.234620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.234789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.234818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.235052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.235083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.235374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.235406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.235643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.235672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 
00:28:01.807 [2024-07-12 19:20:04.235881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.235912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.236173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.236205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.236405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.236435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.236643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.236673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.236858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.236888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.237132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.237161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.237392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.237422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.237644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.237672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.237956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.237986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.238290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.238320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 
00:28:01.807 [2024-07-12 19:20:04.238580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.238611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.238811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.238840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.239080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.239117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.239253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.239284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.239481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.239511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.239725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.239755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.239942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.239971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.240167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.240196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.240437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.240497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.240748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.240779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 
00:28:01.807 [2024-07-12 19:20:04.241070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.241101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.241365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.241396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.241515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.241545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.241729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.241759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.241877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.241906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.242155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.242185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.807 [2024-07-12 19:20:04.242391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.807 [2024-07-12 19:20:04.242423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.807 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.242600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.242629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.242795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.242824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.243083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.243114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 
00:28:01.808 [2024-07-12 19:20:04.243281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.243311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.243518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.243548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.243720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.243749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.243871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.243900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.244085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.244115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.244242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.244272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.244508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.244538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.244668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.244698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.244843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.244872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.245135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.245166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 
00:28:01.808 [2024-07-12 19:20:04.245468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.245499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.245779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.245808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.246088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.246117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.246244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.246276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.246452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.246481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.246687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.246716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.246858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.246889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.247047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.247076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.247315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.247346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.247487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.247517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 
00:28:01.808 [2024-07-12 19:20:04.247705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.247736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.247873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.247902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.248180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.248215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.248413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.248443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.248642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.248673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.248928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.248960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.249215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.249257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.249401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.249433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.249624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.249655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.249940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.249975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 
00:28:01.808 [2024-07-12 19:20:04.250267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.250304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.250429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.250461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.250715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.250747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.250966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.250997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.251176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.251206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.251504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.251536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.251714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.808 [2024-07-12 19:20:04.251749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.808 qpair failed and we were unable to recover it. 00:28:01.808 [2024-07-12 19:20:04.251975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.252007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.252194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.252223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.252424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.252454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 
00:28:01.809 [2024-07-12 19:20:04.252635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.252667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.252864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.252895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.253141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.253172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.253404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.253434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.253620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.253659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.253973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.254002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.254248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.254279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.254407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.254437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.254573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.254605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.254861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.254924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 
00:28:01.809 [2024-07-12 19:20:04.255125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.255167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.255479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.255510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.255763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.255793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.255995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.256025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.256261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.256291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.256426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.256455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.256626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.256655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.256843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.256874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.257115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.257144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.257432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.257463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 
00:28:01.809 [2024-07-12 19:20:04.257649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.257679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.257950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.257979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.258211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.258258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.258494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.258524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.258820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.258849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.259161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.259191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.259445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.259476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.259741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.259770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.259978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.260007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.260248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.260278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 
00:28:01.809 [2024-07-12 19:20:04.260409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.260438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.260573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.260602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.260857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.260885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.261140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.261169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.261298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.261329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.261506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.261535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.261670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.261700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.809 [2024-07-12 19:20:04.261877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.809 [2024-07-12 19:20:04.261906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.809 qpair failed and we were unable to recover it. 00:28:01.810 [2024-07-12 19:20:04.262133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.810 [2024-07-12 19:20:04.262162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.810 qpair failed and we were unable to recover it. 00:28:01.810 [2024-07-12 19:20:04.262338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.810 [2024-07-12 19:20:04.262369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.810 qpair failed and we were unable to recover it. 
00:28:01.810 [2024-07-12 19:20:04.262491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.810 [2024-07-12 19:20:04.262519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:01.810 qpair failed and we were unable to recover it.
00:28:01.810 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error against addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back for the rest of this burst, 19:20:04.262 through 19:20:04.311. Every attempt fails with errno = 111; only the failing tqpair changes, moving from 0x7fd86c000b90 to 0x7fd87c000b90 (~19:20:04.264), to 0xcebed0 (~19:20:04.270), to 0x7fd874000b90 (~19:20:04.292), and back to 0x7fd86c000b90 (~19:20:04.302) ...]
00:28:01.815 [2024-07-12 19:20:04.311775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.311804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.312098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.312133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.312272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.312304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.312497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.312525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.312782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.312810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.312986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.313016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.313245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.313275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.313395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.313424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.313693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.313722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.313962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.313991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 
00:28:01.815 [2024-07-12 19:20:04.314248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.314277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.314465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.314495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.314759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.314788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.314956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.314984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.315211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.315255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.315374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.315404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.315664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.315693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.315819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.315849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.815 [2024-07-12 19:20:04.316045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.815 [2024-07-12 19:20:04.316075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.815 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.316336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.316366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 
00:28:01.816 [2024-07-12 19:20:04.316471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.316501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.316681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.316710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.316885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.316914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.317148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.317177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.317356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.317386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.317557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.317586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.317843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.317872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.317999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.318027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.318140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.318170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.318343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.318374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 
00:28:01.816 [2024-07-12 19:20:04.318549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.318578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.318869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.318898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.319081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.319111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.319281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.319310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.319587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.319616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.319819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.319848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.320014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.320043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.320242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.320274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.320535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.320565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.320799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.320828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 
00:28:01.816 [2024-07-12 19:20:04.321016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.321045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.321255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.321292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.321489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.321518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.321645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.321673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.321924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.321952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.322223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.322267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.322502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.322530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.322671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.322700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.322926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.322956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.323130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.323159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 
00:28:01.816 [2024-07-12 19:20:04.323345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.323375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.323566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.323595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.323702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.323731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.323994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.324022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.324244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.324274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.324454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.324484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.324726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.324754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.324982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.325011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.325214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.325253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.816 qpair failed and we were unable to recover it. 00:28:01.816 [2024-07-12 19:20:04.325365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.816 [2024-07-12 19:20:04.325395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:01.817 qpair failed and we were unable to recover it. 
00:28:02.095 [2024-07-12 19:20:04.325630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.095 [2024-07-12 19:20:04.325659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.095 qpair failed and we were unable to recover it. 00:28:02.095 [2024-07-12 19:20:04.325774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.095 [2024-07-12 19:20:04.325803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.095 qpair failed and we were unable to recover it. 00:28:02.095 [2024-07-12 19:20:04.326060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.095 [2024-07-12 19:20:04.326090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.095 qpair failed and we were unable to recover it. 00:28:02.095 [2024-07-12 19:20:04.326361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.095 [2024-07-12 19:20:04.326391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.095 qpair failed and we were unable to recover it. 00:28:02.095 [2024-07-12 19:20:04.326623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.095 [2024-07-12 19:20:04.326652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.095 qpair failed and we were unable to recover it. 00:28:02.095 [2024-07-12 19:20:04.326846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.095 [2024-07-12 19:20:04.326875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.095 qpair failed and we were unable to recover it. 00:28:02.095 [2024-07-12 19:20:04.327056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.095 [2024-07-12 19:20:04.327085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.095 qpair failed and we were unable to recover it. 00:28:02.095 [2024-07-12 19:20:04.327352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.095 [2024-07-12 19:20:04.327382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.095 qpair failed and we were unable to recover it. 00:28:02.095 [2024-07-12 19:20:04.327617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.095 [2024-07-12 19:20:04.327650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.095 qpair failed and we were unable to recover it. 00:28:02.095 [2024-07-12 19:20:04.327838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.327868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 
00:28:02.096 [2024-07-12 19:20:04.328073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.328102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.328221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.328259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.328435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.328465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.328727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.328758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.328905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.328934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.329066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.329096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.329353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.329386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.329622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.329652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.329911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.329940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.330106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.330135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 
00:28:02.096 [2024-07-12 19:20:04.330306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.330335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.330521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.330550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.330778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.330808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.331054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.331083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.331267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.331297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.331516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.331545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.331776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.331805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.332041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.332070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.332244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.332273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.332579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.332608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 
00:28:02.096 [2024-07-12 19:20:04.332777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.332805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.333083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.333112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.333281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.333310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.333423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.333452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.333662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.333691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.333898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.333927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.334167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.334196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.334395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.334430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.334537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.334566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.334740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.334769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 
00:28:02.096 [2024-07-12 19:20:04.334956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.334986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.335266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.335296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.335484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.335514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.335773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.335803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.335987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.336017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.336194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.336235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.336528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.336557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.336692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.336721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.336900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.096 [2024-07-12 19:20:04.336929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.096 qpair failed and we were unable to recover it. 00:28:02.096 [2024-07-12 19:20:04.337191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.337221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 
00:28:02.097 [2024-07-12 19:20:04.337441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.337470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.337599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.337629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.337829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.337858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.338116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.338145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.338347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.338378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.338498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.338528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.338781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.338812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.339089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.339119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.339392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.339423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.339555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.339586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 
00:28:02.097 [2024-07-12 19:20:04.339774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.339803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.339984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.340014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.340221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.340275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.340446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.340476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.340686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.340715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.340987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.341017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.341181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.341210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.341495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.341524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.341703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.341732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 00:28:02.097 [2024-07-12 19:20:04.341946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.341976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it. 
00:28:02.097 [2024-07-12 19:20:04.342164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.097 [2024-07-12 19:20:04.342193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.097 qpair failed and we were unable to recover it.
00:28:02.097 [... the three-part connect()/qpair-connect failure above repeats back to back roughly 200 more times between 2024-07-12 19:20:04.342 and 19:20:04.390; the entries are identical except for their timestamps and the failing tqpair handle, which changes from 0x7fd86c000b90 to 0x7fd87c000b90 (first seen at 19:20:04.376018) and then to 0x7fd874000b90 (first seen at 19:20:04.384376) ...]
00:28:02.102 [2024-07-12 19:20:04.390648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.102 [2024-07-12 19:20:04.390677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.102 qpair failed and we were unable to recover it. 00:28:02.102 [2024-07-12 19:20:04.390861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.102 [2024-07-12 19:20:04.390891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.102 qpair failed and we were unable to recover it. 00:28:02.102 [2024-07-12 19:20:04.391097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.102 [2024-07-12 19:20:04.391125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.102 qpair failed and we were unable to recover it. 00:28:02.102 [2024-07-12 19:20:04.391250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.391279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.391447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.391477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.391709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.391738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.391857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.391887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.392009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.392038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.392269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.392298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.392573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.392615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 
00:28:02.103 [2024-07-12 19:20:04.392866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.392896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.393084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.393113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.393377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.393409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.393587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.393617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.393813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.393842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.394046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.394074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.394192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.394221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.394405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.394435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.394617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.394646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.394817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.394847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 
00:28:02.103 [2024-07-12 19:20:04.394956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.394984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.395167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.395196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.395441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.395471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.395658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.395688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.395859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.395888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.395989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.396019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.396204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.396240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.396372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.396401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.396657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.396686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.396853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.396882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 
00:28:02.103 [2024-07-12 19:20:04.397065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.397095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.397271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.397302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.397481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.397511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.397772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.397801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.397903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.397932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.398167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.398196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.398449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.398486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.398611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.398640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.398814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.398843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.399073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.399103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 
00:28:02.103 [2024-07-12 19:20:04.399272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.399302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.399479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.399508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.399692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.399722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.399955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.399983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.103 qpair failed and we were unable to recover it. 00:28:02.103 [2024-07-12 19:20:04.400170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.103 [2024-07-12 19:20:04.400198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.400326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.400360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.400494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.400523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.400717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.400746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.400877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.400906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.401093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.401123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 
00:28:02.104 [2024-07-12 19:20:04.401338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.401368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.401549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.401578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.401763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.401792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.401974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.402004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.402242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.402272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.402405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.402434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.402642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.402672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.402904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.402933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.403215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.403256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.403445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.403475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 
00:28:02.104 [2024-07-12 19:20:04.403709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.403739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.403983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.404012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.404277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.404307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.404506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.404538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.404789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.404818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.405105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.405134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.405398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.405429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.405639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.405668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.405912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.405941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.406238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.406269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 
00:28:02.104 [2024-07-12 19:20:04.406556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.406585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.406844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.406873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.407106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.407136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.407399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.407429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.407565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.407594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.407793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.407822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.408078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.408107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.408322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.408352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.408592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.104 [2024-07-12 19:20:04.408621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.104 qpair failed and we were unable to recover it. 00:28:02.104 [2024-07-12 19:20:04.408807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.408836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 
00:28:02.105 [2024-07-12 19:20:04.409022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.409051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.409297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.409327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.409610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.409639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.409920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.409949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.410255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.410285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.410476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.410505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.410747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.410776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.410960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.410990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.411247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.411277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.411488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.411519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 
00:28:02.105 [2024-07-12 19:20:04.411763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.411797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.412029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.412059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.412179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.412208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.412504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.412533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.412784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.412813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.413098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.413127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.413409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.413440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.413648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.413677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.413919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.413948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.414207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.414252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 
00:28:02.105 [2024-07-12 19:20:04.414533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.414564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.415006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.415040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.415300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.415336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.415594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.415624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.415872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.415902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.416160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.416190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.416385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.416416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.416696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.416726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.416998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.417028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.417319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.417349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 
00:28:02.105 [2024-07-12 19:20:04.417622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.417652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.417833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.417864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.418119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.418149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.418384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.418415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.418585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.418615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.418798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.418827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.419025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.419054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.419314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.419343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.419611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.419642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 00:28:02.105 [2024-07-12 19:20:04.419893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.105 [2024-07-12 19:20:04.419923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.105 qpair failed and we were unable to recover it. 
00:28:02.105 [2024-07-12 19:20:04.420180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.420209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.420504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.420534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.420753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.420783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.421014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.421043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.421298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.421328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.421588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.421618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.421792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.421821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.422008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.422036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.422214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.422260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.422521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.422551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 
00:28:02.106 [2024-07-12 19:20:04.422748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.422778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.423011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.423045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.423326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.423357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.423549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.423579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.423763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.423792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.424073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.424102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.424337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.424367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.424543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.424573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.424807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.424837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.425068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.425097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 
00:28:02.106 [2024-07-12 19:20:04.425352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.425382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.425615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.425645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.425771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.425799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.426058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.426087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.426343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.426372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.426614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.426644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.426829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.426859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.427119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.427148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.427382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.427412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.427581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.427611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 
00:28:02.106 [2024-07-12 19:20:04.427779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.427808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.428092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.428121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.428307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.428338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.428518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.428547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.428750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.428778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.428948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.428977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.429084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.429114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.429386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.429417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.429600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.429634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.429805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.429834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 
00:28:02.106 [2024-07-12 19:20:04.430092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.430121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.106 qpair failed and we were unable to recover it. 00:28:02.106 [2024-07-12 19:20:04.430406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.106 [2024-07-12 19:20:04.430436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.430634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.430663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.430829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.430858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.430986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.431016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.431183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.431212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.431406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.431436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.431718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.431747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.431945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.431974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.432205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.432243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 
00:28:02.107 [2024-07-12 19:20:04.432456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.432486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.432698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.432727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.432975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.433005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.433266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.433296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.433556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.433585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.433751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.433780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.433964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.433993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.434246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.434277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.434444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.434474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.434756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.434786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 
00:28:02.107 [2024-07-12 19:20:04.434980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.435009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.435177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.435206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.435396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.435426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.435630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.435659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.435779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.435808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.435980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.436009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.436257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.436287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.436469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.436499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.436734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.436763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.437006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.437035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 
00:28:02.107 [2024-07-12 19:20:04.437294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.437324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.437519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.437549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.437731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.437760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.437961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.437990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.438178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.438207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.438407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.438437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.438555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.438584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.438859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.438889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.439146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.439175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.439415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.439451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 
00:28:02.107 [2024-07-12 19:20:04.439686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.439716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.439944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.439974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.107 [2024-07-12 19:20:04.440142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.107 [2024-07-12 19:20:04.440172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.107 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.440445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.440475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.440736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.440765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.441045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.441076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.441352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.441382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.441621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.441651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.441935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.441964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.442260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.442292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 
00:28:02.108 [2024-07-12 19:20:04.442528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.442558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.442814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.442843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.443025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.443055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.443232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.443263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.443517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.443546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.443782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.443811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.444066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.444096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.444267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.444296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.444411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.444439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.444608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.444638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 
00:28:02.108 [2024-07-12 19:20:04.444914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.444943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.445174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.445203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.445472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.445502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.445784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.445813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.446026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.446056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.446257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.446288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.446522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.446560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.446850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.446879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.447049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.447078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.447337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.447369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 
00:28:02.108 [2024-07-12 19:20:04.447571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.447600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.447855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.447884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.448069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.448098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.108 [2024-07-12 19:20:04.448343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.108 [2024-07-12 19:20:04.448373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.108 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.448578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.448608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.448774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.448804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.449077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.449106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.449303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.449333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.449589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.449619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.449824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.449853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 
00:28:02.109 [2024-07-12 19:20:04.450152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.450190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.450451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.450483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.450726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.450755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.451036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.451066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.451300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.451331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.451583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.451613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.451862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.451891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.452172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.452202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.452484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.452515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.452721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.452751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 
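From this point the failing tqpair value changes from 0xcebed0 to 0x7fd874000b90 (and later to other 0x7fd8... addresses), which is consistent with fresh qpair objects being used for successive attempts while the target keeps refusing, so the loop never converges. A caller that wants to bound this rather than retry indefinitely might wrap connect() in a capped-backoff loop like the hypothetical helper below; connect_with_retry and its retry policy are illustrative assumptions, not SPDK behavior.

    /* Hypothetical capped-backoff wrapper (illustrative, not SPDK code):
     * retries only errno 111 and gives up after max_attempts. */
    #include <errno.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static int connect_with_retry(const char *ip, int port, int max_attempts)
    {
        for (int attempt = 1; attempt <= max_attempts; attempt++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;

            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port = htons((unsigned short)port);
            inet_pton(AF_INET, ip, &addr.sin_addr);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                return fd;                        /* connected: hand back the fd */

            int saved = errno;
            close(fd);
            if (saved != ECONNREFUSED)            /* only retry errno 111 */
                return -1;
            sleep(attempt < 5 ? 1u << attempt : 32u);  /* 2,4,8,16s, capped at 32s */
        }
        return -1;
    }

For example, connect_with_retry("10.0.0.2", 4420, 8) either returns a usable descriptor or fails fast once the refusals are clearly not transient.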
00:28:02.109 [2024-07-12 19:20:04.453033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.453062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.453293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.453323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.453580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.453610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.453818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.453854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.454114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.454144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.454387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.454417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.454606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.454636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.454869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.454898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.455075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.455104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.455291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.455322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 
00:28:02.109 [2024-07-12 19:20:04.455599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.455628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.455870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.455900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.456157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.456186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.456454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.456484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.456726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.456755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.456935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.456964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.457131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.457161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.457423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.457454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.457735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.457763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.457966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.457995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 
00:28:02.109 [2024-07-12 19:20:04.458235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.458265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.458447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.458477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.458734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.458763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.459026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.459055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.109 qpair failed and we were unable to recover it. 00:28:02.109 [2024-07-12 19:20:04.459289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.109 [2024-07-12 19:20:04.459319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.459579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.459608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.459776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.459805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.460088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.460117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.460301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.460331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.460597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.460626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 
00:28:02.110 [2024-07-12 19:20:04.460878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.460949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.461124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.461180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.461409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.461442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.461728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.461757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.461924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.461953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.462131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.462160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.462395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.462425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.462608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.462637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.462835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.462865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.462991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.463020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 
00:28:02.110 [2024-07-12 19:20:04.463192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.463221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.463474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.463504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.463752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.463782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.463948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.463977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.464173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.464202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.464392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.464422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.464590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.464619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.464790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.464819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.464995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.465025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 00:28:02.110 [2024-07-12 19:20:04.465308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.110 [2024-07-12 19:20:04.465339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.110 qpair failed and we were unable to recover it. 
00:28:02.110 [2024-07-12 19:20:04.465595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.110 [2024-07-12 19:20:04.465624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:02.110 qpair failed and we were unable to recover it.
[... the three-line record above repeats about 210 times with strictly increasing timestamps, from 19:20:04.465809 through 19:20:04.516767 (elapsed 00:28:02.110-00:28:02.115); every attempt fails with errno = 111 against addr=10.0.0.2, port=4420, and the reported tqpair alternates among 0xcebed0, 0x7fd874000b90, and 0x7fd87c000b90 ...]
00:28:02.115 [2024-07-12 19:20:04.516996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.115 [2024-07-12 19:20:04.517025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420
00:28:02.115 qpair failed and we were unable to recover it.
00:28:02.115 [2024-07-12 19:20:04.517256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.115 [2024-07-12 19:20:04.517286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.115 qpair failed and we were unable to recover it. 00:28:02.115 [2024-07-12 19:20:04.517522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.115 [2024-07-12 19:20:04.517551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.115 qpair failed and we were unable to recover it. 00:28:02.115 [2024-07-12 19:20:04.517676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.115 [2024-07-12 19:20:04.517705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.115 qpair failed and we were unable to recover it. 00:28:02.115 [2024-07-12 19:20:04.517887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.115 [2024-07-12 19:20:04.517916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.115 qpair failed and we were unable to recover it. 00:28:02.115 [2024-07-12 19:20:04.518184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.518214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.518400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.518430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.518620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.518649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.518779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.518808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.518974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.519002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.519175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.519204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 
00:28:02.116 [2024-07-12 19:20:04.519430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.519459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.519693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.519722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.519911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.519941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.520144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.520173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.520393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.520423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.520541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.520570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.520770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.520799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.521040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.521068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.521334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.521363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.521623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.521651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 
00:28:02.116 [2024-07-12 19:20:04.521772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.521802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.522043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.522071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.522264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.522293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.522544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.522573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.522709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.522738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.522955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.522985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.523166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.523197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.523473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.523510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.523809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.523838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.524047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.524077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 
00:28:02.116 [2024-07-12 19:20:04.524314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.524345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.524661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.524691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.524904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.524934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.525194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.525223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.525493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.525523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.525731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.525761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.525893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.525922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.526110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.526139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.526403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.526437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.526693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.526722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 
00:28:02.116 [2024-07-12 19:20:04.526975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.527010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.527299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.527330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.527568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.527599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.527778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.527808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.116 [2024-07-12 19:20:04.528043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.116 [2024-07-12 19:20:04.528073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.116 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.528223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.528266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.528498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.528527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.528762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.528792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.529060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.529089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.529215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.529256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 
00:28:02.117 [2024-07-12 19:20:04.529388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.529418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.529543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.529572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.529701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.529730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.529901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.529930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.530139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.530169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.530378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.530409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.530610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.530639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.530886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.530915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.531219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.531259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.531455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.531485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 
00:28:02.117 [2024-07-12 19:20:04.531657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.531686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.531872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.531901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.532072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.532101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.532325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.532358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.532609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.532639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.532840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.532869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.533132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.533161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd87c000b90 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.533371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.533404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.533547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.533576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.533760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.533789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 
00:28:02.117 [2024-07-12 19:20:04.533977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.534006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.534212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.534270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.534401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.534429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.534625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.534654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.534902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.534930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.535112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.535142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.535404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.535434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.535623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.535651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.535829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.117 [2024-07-12 19:20:04.535857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.117 qpair failed and we were unable to recover it. 00:28:02.117 [2024-07-12 19:20:04.536132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.536161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 
00:28:02.118 [2024-07-12 19:20:04.536423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.536452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.536663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.536692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.536877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.536905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.537163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.537192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.537363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.537392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.537655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.537685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.537856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.537886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.538166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.538195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.538332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.538362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.538503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.538532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 
00:28:02.118 [2024-07-12 19:20:04.538671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.538700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.538941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.538969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.539236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.539266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.539377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.539406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.539602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.539637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.539832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.539861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.539997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.540025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.540220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.540272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.540477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.540506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.540645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.540674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 
00:28:02.118 [2024-07-12 19:20:04.540879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.540907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.541093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.541122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.541358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.541387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.541565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.541594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.541778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.541807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.541986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.542015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.542136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.542165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.542374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.542406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.542590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.542619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.542864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.542892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 
00:28:02.118 [2024-07-12 19:20:04.543095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.543125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.543246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.543277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.543511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.543541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.543676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.543705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.543816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.543844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.544024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.544053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.544244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.544274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.544467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.544495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.544612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.544641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 00:28:02.118 [2024-07-12 19:20:04.544916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.118 [2024-07-12 19:20:04.544945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.118 qpair failed and we were unable to recover it. 
00:28:02.118 [2024-07-12 19:20:04.545186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.545214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.545459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.545488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.545687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.545716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.545999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.546028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.546195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.546223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.546368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.546400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.546502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.546531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.546669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.546698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.546884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.546913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.547041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.547071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 
00:28:02.119 [2024-07-12 19:20:04.547312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.547342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.547479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.547508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.547631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.547660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.547869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.547898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.548085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.548114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.548292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.548326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.548518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.548547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.548747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.548775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.549052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.549081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.549284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.549314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 
00:28:02.119 [2024-07-12 19:20:04.549501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.549530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.549713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.549742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.549860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.549888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.550009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.550039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.550243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.550273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.550462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.550491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.550682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.550711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.550885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.550915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.551200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.551239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.551434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.551463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 
00:28:02.119 [2024-07-12 19:20:04.551582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.551612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.551788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.551816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.551992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.552021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.552293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.552325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.552450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.552479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.552711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.552740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.552948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.552978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.553156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.553185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.553442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.553473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.553650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.553678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 
00:28:02.119 [2024-07-12 19:20:04.553850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.119 [2024-07-12 19:20:04.553879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.119 qpair failed and we were unable to recover it. 00:28:02.119 [2024-07-12 19:20:04.554002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.554031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.554155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.554189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.554321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.554350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.554540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.554569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.554777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.554806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.555013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.555042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.555223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.555277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.555464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.555494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.555618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.555647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 
00:28:02.120 [2024-07-12 19:20:04.555792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.555823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.556029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.556059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.556297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.556328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.556505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.556535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.556650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.556679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.556870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.556899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.557136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.557165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.557288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.557318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.557601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.557630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.557808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.557837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 
00:28:02.120 [2024-07-12 19:20:04.557954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.557989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.558238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.558268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.558404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.558433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.558613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.558642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.558811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.558840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.559024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.559053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.559254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.559284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.559455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.559484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.559610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.559639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.559810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.559838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 
00:28:02.120 [2024-07-12 19:20:04.560053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.560083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.560298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.560329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.560458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.560487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.560692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.560721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.560933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.560963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.561077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.561105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.561363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.561394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.561508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.561536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.561744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.561773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 [2024-07-12 19:20:04.561876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.561905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 
00:28:02.120 [2024-07-12 19:20:04.562170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.562199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcebed0 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.120 A controller has encountered a failure and is being reset. 00:28:02.120 [2024-07-12 19:20:04.562457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.120 [2024-07-12 19:20:04.562525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.120 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.562672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.562706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.562932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.562963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.563129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.563158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.563354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.563386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.563508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.563538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.563731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.563760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.564045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.564074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.564198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.564240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 
00:28:02.121 [2024-07-12 19:20:04.564372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.564402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.564541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.564570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.564753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.564782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.565013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.565042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.565221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.565266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.565441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.565471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.565599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.565634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.565808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.565838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.566061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.566090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.566256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.566287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 
00:28:02.121 [2024-07-12 19:20:04.566482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.566511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.566638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.566668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.566942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.566972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.567177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.567207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.567392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.567422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.567601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.567630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.567815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.567845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.568030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.568060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.568352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.568382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.568616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.568645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 
00:28:02.121 [2024-07-12 19:20:04.568774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.568804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.569045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.569074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.569193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.569222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.569416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.569447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.569634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.569664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.569857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.121 [2024-07-12 19:20:04.569887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.121 qpair failed and we were unable to recover it. 00:28:02.121 [2024-07-12 19:20:04.570163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.570192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.570357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.570389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.570502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.570532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.570655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.570684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 
00:28:02.122 [2024-07-12 19:20:04.570792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.570821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.570932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.570961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.571135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.571165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.571310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.571341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.571464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.571493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.571667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.571696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.571957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.571986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.572218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.572259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.572405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.572435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.572637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.572666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 
00:28:02.122 [2024-07-12 19:20:04.572790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.572819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.572998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.573028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.573256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.573288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.573423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.573452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.573567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.573596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.573763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.573791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.573980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.574014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.574210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.574251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.574380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.574409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.574547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.574576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 
00:28:02.122 [2024-07-12 19:20:04.574683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.574712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.575059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.575089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.575261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.575291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.575489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.575518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.575626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.575655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.575768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.575797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.575993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.576022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.576197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.576233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.576453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.576484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.576602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.576631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 
00:28:02.122 [2024-07-12 19:20:04.576755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.576784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.576896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.576925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.577186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.577215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.577350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.577380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.577514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.577543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.577655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.577684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.122 [2024-07-12 19:20:04.577807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.122 [2024-07-12 19:20:04.577836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.122 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.578076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.578106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.578290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.578320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.578505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.578534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 
00:28:02.123 [2024-07-12 19:20:04.578665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.578694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.578929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.578958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.579200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.579238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.579387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.579432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.579569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.579601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.579736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.579767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.579970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.580000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.580204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.580251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.580427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.580457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.580588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.580618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 
00:28:02.123 [2024-07-12 19:20:04.580751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.580781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.580908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.580938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.581064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.581093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.581282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.581313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.581441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.581471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.581594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.581625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.581766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.581805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.582037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.582068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.582179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.582209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.582342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.582372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 
00:28:02.123 [2024-07-12 19:20:04.582483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.582512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.582628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.582658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.582864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.582894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.583131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.583161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.583394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.583425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.583560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.583590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.583769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.583798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.583986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.584017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.584181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.584211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.584481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.584512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 
00:28:02.123 [2024-07-12 19:20:04.584707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.584737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.584994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.585024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.585266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.585296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.585439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.585469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.585671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.585701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.585897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.585927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.586115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.586145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.123 qpair failed and we were unable to recover it. 00:28:02.123 [2024-07-12 19:20:04.586290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.123 [2024-07-12 19:20:04.586321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.124 qpair failed and we were unable to recover it. 00:28:02.124 [2024-07-12 19:20:04.586602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.124 [2024-07-12 19:20:04.586632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.124 qpair failed and we were unable to recover it. 00:28:02.124 [2024-07-12 19:20:04.586831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.124 [2024-07-12 19:20:04.586860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.124 qpair failed and we were unable to recover it. 
00:28:02.124 [2024-07-12 19:20:04.587096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.587125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.587376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.587406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.587550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.587580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.587890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.587926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.588174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.588204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.588362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.588392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.588520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.588549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.588719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.588748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.588865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.588894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.589075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.589104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.589278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.589308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.589498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.589528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.589724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.589753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.590037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.590067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.590248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.590278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.590464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.590493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.590674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.590708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.590967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.590996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.591316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.591346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.591585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.591614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.591725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.591754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.591986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.592016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.592283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.592313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.592546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.592574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.592761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.592790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.592986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.593016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.593271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.593302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.593539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.593567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.593817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.593846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.594015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.594044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.594316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.594347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.594534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.594563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.594798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.594828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.595053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.595082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.595266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.595296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.595483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.595512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.595651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.595681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.595926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.124 [2024-07-12 19:20:04.595956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.124 qpair failed and we were unable to recover it.
00:28:02.124 [2024-07-12 19:20:04.596141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.596170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.596357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.596387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.596510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.596540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.596658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.596687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.596815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.596845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.597082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.597116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.597356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.597387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.597528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.597557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.597739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.597770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.598016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.598046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.598221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.598259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.598392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.598422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.598552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.598582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.598899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.598929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.599039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.599069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.599353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.599383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.599615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.599644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.599881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.599911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.600088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.600124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.600329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.600359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.600635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.600666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.600925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.600955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.601198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.601249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.601444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.601474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.601656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.601685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.601913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.601942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.602177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.602206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.602461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.602492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.602742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.602772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.602952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.602982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.603217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.603256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.603438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.603468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.603758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.603788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.604020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.604049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.604307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.604339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.604480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.604509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.604687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.604717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.605035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.605064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.605350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.605381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.605560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.605590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.125 qpair failed and we were unable to recover it.
00:28:02.125 [2024-07-12 19:20:04.605828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.125 [2024-07-12 19:20:04.605859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.606034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.606064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.606303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.606333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.606516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.606546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.606737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.606766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.606907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.606939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.607142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.607172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.607433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.607464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.607754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.607782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.607995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.608024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.608328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.608358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.608544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.608572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.608744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.608774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.609029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.609058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.609258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.609287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.609494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.609523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.609720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.609750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.609930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.609959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.610145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.610180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.610401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.610431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.610644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.610673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.610879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.610908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.611085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.611114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.611369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.611401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.611635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.611664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.611859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.611888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.612087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.612116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.612375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.612405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.612591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.612621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.612765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.612794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.612922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.612951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.613191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.613221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.613436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.613466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.613661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.613690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.613955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.126 [2024-07-12 19:20:04.613985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.126 qpair failed and we were unable to recover it.
00:28:02.126 [2024-07-12 19:20:04.614242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.614272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.614405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.614435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.614601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.614630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.614794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.614824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.615029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.615059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.615293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.615323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.615499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.615529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.615821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.615850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.616052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.616081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.616363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.616393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd86c000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.616571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.616604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.616854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.616885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.617164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.617194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.617402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.617433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.617666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.617695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.617966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.617996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.618134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.618164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.618364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.618395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.618654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.618684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.618810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.618840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.618983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.619012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.619200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.619246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.619425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.619454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.619584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.619620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.619861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.619890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.620155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.620184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.620459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.620491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.620701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.620731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.620971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.621001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.621266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.621296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.621559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.621588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.621717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.621746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.621939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.621968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.622241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.622271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.622550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.622580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.622776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.622806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.622992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.623021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.623261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.623291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.623478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.623508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.623624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.623653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.127 [2024-07-12 19:20:04.623841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.127 [2024-07-12 19:20:04.623870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.127 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.624104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.624134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.624262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.624292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.624477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.624506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.624762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.624793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.625090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.625120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.625324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.625354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.625489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.625518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.625639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.625668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.625909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.625938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.626120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.626155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.626278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.626308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.626443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.626473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.626648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.626677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.626886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.626916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.627041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.627071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.627347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.128 [2024-07-12 19:20:04.627378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.128 qpair failed and we were unable to recover it.
00:28:02.128 [2024-07-12 19:20:04.627549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.627578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.627771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.627800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.627989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.628019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.628236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.628267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.628457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.628487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.628668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.628697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.628940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.628970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.629184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.629213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.629397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.629427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.629542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.629572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 
00:28:02.128 [2024-07-12 19:20:04.629779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.629810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.630043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.630072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.630269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.630300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.630415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.630444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.630652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.630682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.630958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.630987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.631165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.631194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.631389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.631420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.631656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.631686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.631928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.631958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 
00:28:02.128 [2024-07-12 19:20:04.632181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.632211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.632473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.632504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.632628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.632657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.128 [2024-07-12 19:20:04.632842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.128 [2024-07-12 19:20:04.632871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.128 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.633068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.633097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.633218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.633255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.633458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.633489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.633672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.633703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.633885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.633914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.634100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.634129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 
00:28:02.129 [2024-07-12 19:20:04.634314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.634345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.634541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.634570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.634785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.634814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.634937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.634971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.635209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.635247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.635431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.635460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.635743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.635773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.636048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.636077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.636322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.636353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.636542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.636572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 
00:28:02.129 [2024-07-12 19:20:04.636694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.636723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.636922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.636952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.637137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.637168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.637386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.637416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.637628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.637658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.637882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.637911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.638115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.638145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.638318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.638348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.638527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.638556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.638752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.638781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 
00:28:02.129 [2024-07-12 19:20:04.639068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.639098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.639276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.639307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.639436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.639465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.639714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.639742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.639952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.639981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.640244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.640274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.640453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.640482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.640738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.640767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.641070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.641099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.641348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.641379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 
00:28:02.129 [2024-07-12 19:20:04.641576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.641606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.641770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.641799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.642077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.642107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.642283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.642314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.129 [2024-07-12 19:20:04.642522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.129 [2024-07-12 19:20:04.642551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.129 qpair failed and we were unable to recover it. 00:28:02.130 [2024-07-12 19:20:04.642688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.130 [2024-07-12 19:20:04.642717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.130 qpair failed and we were unable to recover it. 00:28:02.130 [2024-07-12 19:20:04.642898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.130 [2024-07-12 19:20:04.642928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.130 qpair failed and we were unable to recover it. 00:28:02.130 [2024-07-12 19:20:04.643215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.130 [2024-07-12 19:20:04.643269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.130 qpair failed and we were unable to recover it. 00:28:02.130 [2024-07-12 19:20:04.643405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.130 [2024-07-12 19:20:04.643434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.130 qpair failed and we were unable to recover it. 00:28:02.130 [2024-07-12 19:20:04.643621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.643651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 
00:28:02.393 [2024-07-12 19:20:04.643914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.643945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.644123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.644152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.644344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.644374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.644574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.644609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.644752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.644781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.644966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.644997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.645176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.645205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.645403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.645433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.645611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.645640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.645819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.645849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 
00:28:02.393 [2024-07-12 19:20:04.646051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.646080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.646265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.646296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.646479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.646508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.646739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.646769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.647015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.647044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.647294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.647324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.647562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.647592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.647777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.647807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.647994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.648025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.648201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.648250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 
00:28:02.393 [2024-07-12 19:20:04.648550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.648581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.648757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.648787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.393 [2024-07-12 19:20:04.648987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.393 [2024-07-12 19:20:04.649017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.393 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.649134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.649164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.649350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.649380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.649583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.649613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.649752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.649782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.649984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.650012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.650267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.650298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.650504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.650534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 
00:28:02.394 [2024-07-12 19:20:04.650744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.650774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.650991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.651021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.651296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.651327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.651506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.651535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.651714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.651743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.651929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.651959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.652183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.652213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.652363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.652394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.652562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.652592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.652731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.652760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 
00:28:02.394 [2024-07-12 19:20:04.653012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.653041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.653262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.653292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.653548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.653577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.653838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.653874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.654006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.654036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.654216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.654258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.654426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.654456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.654693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.654722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.654972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.655002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.655244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.655275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 
00:28:02.394 [2024-07-12 19:20:04.655412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.655441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.655626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.655656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.655889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.655919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.656044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.656073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.656273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.394 [2024-07-12 19:20:04.656304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.394 qpair failed and we were unable to recover it. 00:28:02.394 [2024-07-12 19:20:04.656421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.656451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.656701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.656731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.657012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.657042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.657239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.657270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.657454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.657483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 
00:28:02.395 [2024-07-12 19:20:04.657675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.657705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.657995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.658024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.658296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.658326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.658511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.658540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.658706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.658735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.658863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.658892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.659123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.659153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.659418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.659449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.659630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.659659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.659769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.659800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 
00:28:02.395 [2024-07-12 19:20:04.660023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.660053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.660296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.660327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.660509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.660539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.660717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.660747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.661029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.661059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.661255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.661285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.661474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.661504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.661784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.661813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.662052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.662081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 00:28:02.395 [2024-07-12 19:20:04.662346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.395 [2024-07-12 19:20:04.662376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 00:28:02.395 qpair failed and we were unable to recover it. 
00:28:02.395 [2024-07-12 19:20:04.662517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.395 [2024-07-12 19:20:04.662547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420
00:28:02.395 qpair failed and we were unable to recover it.
[... the same three-record sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fd874000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 19:20:04.662 through 19:20:04.692 ...]
00:28:02.400 [2024-07-12 19:20:04.692464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.400 [2024-07-12 19:20:04.692519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfa000 with addr=10.0.0.2, port=4420
00:28:02.400 [2024-07-12 19:20:04.692543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa000 is same with the state(5) to be set
00:28:02.400 [2024-07-12 19:20:04.692576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfa000 (9): Bad file descriptor
00:28:02.400 [2024-07-12 19:20:04.692600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:02.400 [2024-07-12 19:20:04.692619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:02.400 [2024-07-12 19:20:04.692640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:02.400 Unable to reset the controller.
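A note for anyone triaging the block above: errno = 111 is ECONNREFUSED on Linux, so each posix_sock_create/nvme_tcp_qpair_connect_sock failure simply means nothing was accepting TCP connections on 10.0.0.2:4420 during the disconnect window, and the host keeps retrying until the target returns (see "Controller properly reset." further down). A minimal standalone probe of the same condition, offered only as a sketch (assumes bash and a netcat that supports -z; the address, port, and refused-connection behavior are taken from the log):

  # Retry a TCP connect the way the failing qpair does; each refusal is errno 111.
  for attempt in {1..5}; do
    if nc -z -w 1 10.0.0.2 4420; then
      echo "attempt ${attempt}: target is accepting connections"
      break
    fi
    echo "attempt ${attempt}: connect() refused (ECONNREFUSED, errno 111)"
    sleep 1
  done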
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:02.400 Malloc0
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:02.400 [2024-07-12 19:20:04.943479] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:02.400 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:02.659 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:02.659 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:02.659 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:02.659 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:02.659 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:02.659 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:02.659 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:02.660 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:02.660 [2024-07-12 19:20:04.975708] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:02.660 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:02.660 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:02.660 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:02.660 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:02.660 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:02.660 19:20:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 464704
00:28:03.228 Controller properly reset.
00:28:08.499 Initializing NVMe Controllers
00:28:08.499 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:08.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:08.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:28:08.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:28:08.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:28:08.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:28:08.499 Initialization complete. Launching workers.
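The rpc_cmd trace above is the complete target-side configuration for this test case: one malloc bdev, a TCP transport, a subsystem, a namespace, and two listeners. Outside the harness the same sequence would normally be issued through scripts/rpc.py (rpc_cmd is the test wrapper around it); a sketch, assuming the stock SPDK tree layout and the default /var/tmp/spdk.sock RPC socket:

  # Recreate the target configuration logged above, one RPC at a time.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MB backing bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_transport -t tcp -o            # TCP transport
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420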
00:28:08.499 Starting thread on core 1
00:28:08.499 Starting thread on core 2
00:28:08.499 Starting thread on core 3
00:28:08.499 Starting thread on core 0
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:28:08.499
00:28:08.499 real 0m11.291s
00:28:08.499 user 0m36.381s
00:28:08.499 sys 0m5.852s
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:08.499 ************************************
00:28:08.499 END TEST nvmf_target_disconnect_tc2
00:28:08.499 ************************************
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:08.499 rmmod nvme_tcp
00:28:08.499 rmmod nvme_fabrics
00:28:08.499 rmmod nvme_keyring
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 465396 ']'
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 465396
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 465396 ']'
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 465396
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 465396
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']'
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 465396'
killing process with pid 465396
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 465396
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 465396
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:08.499 19:20:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:10.406 19:20:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:10.406
00:28:10.406 real 0m19.796s
00:28:10.406 user 1m3.430s
00:28:10.406 sys 0m10.760s
00:28:10.406 19:20:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:10.406 19:20:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:28:10.406 ************************************
00:28:10.406 END TEST nvmf_target_disconnect
00:28:10.406 ************************************
00:28:10.406 19:20:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:28:10.406 19:20:12 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host
00:28:10.406 19:20:12 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:10.406 19:20:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:10.406 19:20:12 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:28:10.406
00:28:10.406 real 21m34.241s
00:28:10.406 user 46m21.232s
00:28:10.406 sys 6m35.482s
00:28:10.406 ************************************
00:28:10.406 END TEST nvmf_tcp
00:28:10.406 ************************************
00:28:10.406 19:20:12 -- common/autotest_common.sh@1142 -- # return 0
00:28:10.406 19:20:12 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:28:10.406 19:20:12 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:28:10.406 19:20:12 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:28:10.406 19:20:12 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:10.406 19:20:12 -- common/autotest_common.sh@10 -- # set +x
00:28:10.406 ************************************
00:28:10.406 START TEST spdkcli_nvmf_tcp
00:28:10.406 ************************************
00:28:10.406 19:20:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:28:10.406 * Looking for test storage...
00:28:10.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:28:10.665 19:20:12 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=466925
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 466925
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 466925 ']'
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:10.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
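Worth decoding before the reactor messages that follow: nvmf_tgt is launched with -m 0x3, a hexadecimal core mask whose two set bits select cores 0 and 1, which is why exactly two "Reactor started" notices appear below. A throwaway shell illustration of reading such a mask (not part of the test):

  mask=0x3   # core mask from the nvmf_tgt invocation above
  for core in {0..31}; do
    (( (mask >> core) & 1 )) && echo "reactor pinned to core ${core}"
  done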
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:10.665 19:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:10.665 [2024-07-12 19:20:13.058916] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:28:10.665 [2024-07-12 19:20:13.058963] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466925 ]
00:28:10.665 EAL: No free 2048 kB hugepages reported on node 1
00:28:10.665 [2024-07-12 19:20:13.126116] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:28:10.665 [2024-07-12 19:20:13.206850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:10.665 [2024-07-12 19:20:13.206851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:11.603 19:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:11.603 19:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0
00:28:11.603 19:20:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:28:11.603 19:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:11.603 19:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:11.603 19:20:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:28:11.603 19:20:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:28:11.603 19:20:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:28:11.603 19:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:11.603 19:20:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:11.603 19:20:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:28:11.603 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:28:11.603 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:28:11.603 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:28:11.603 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:28:11.603 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:28:11.603 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:28:11.603 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:28:11.603 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:28:11.603 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:28:11.603 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:28:11.603 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:28:11.603 '
00:28:14.141 [2024-07-12 19:20:16.521583] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:15.518 [2024-07-12 19:20:17.805793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:28:18.048 [2024-07-12 19:20:20.193211] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:28:19.953 [2024-07-12 19:20:22.251485] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
00:28:21.327 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:28:21.327 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:28:21.327 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:28:21.327 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:28:21.327 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:28:21.327 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:28:21.327 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:28:21.327 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:28:21.327 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:28:21.327 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:28:21.327 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:28:21.327 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:28:21.327 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:28:21.327 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:28:21.328 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:28:21.328 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:28:21.328 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:28:21.328 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:28:21.328 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:28:21.328 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:28:21.328 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:28:21.328 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:28:21.328 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:28:21.328 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:28:21.328 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:28:21.328 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:28:21.328 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:28:21.328 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:28:21.587 19:20:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:28:21.587 19:20:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:21.587 19:20:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:21.587 19:20:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:28:21.587 19:20:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:21.587 19:20:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:21.587 19:20:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match
00:28:21.587 19:20:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:28:21.847 19:20:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:21.847 19:20:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:21.847 19:20:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:21.847 19:20:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:21.847 19:20:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:22.106 19:20:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:22.106 19:20:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:22.106 19:20:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:22.106 19:20:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:22.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:22.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:22.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:22.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:28:22.106 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:28:22.106 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:22.106 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:22.106 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:22.106 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:22.106 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:22.106 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:22.106 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:22.106 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:22.106 ' 00:28:27.382 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:27.382 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:27.382 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:27.382 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:27.382 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:27.382 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:27.382 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:27.382 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:27.382 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:27.382 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:27.382 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:28:27.382 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:27.382 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:27.382 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:27.382 19:20:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:27.382 19:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:27.382 19:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:27.382 19:20:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 466925 00:28:27.382 19:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 466925 ']' 00:28:27.382 19:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 466925 00:28:27.382 19:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:28:27.382 19:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:27.382 19:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 466925 00:28:27.382 19:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:27.382 19:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:27.382 19:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 466925' 00:28:27.382 killing process with pid 466925 00:28:27.382 19:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 466925 00:28:27.382 19:20:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 466925 00:28:27.641 19:20:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:27.641 19:20:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:27.641 19:20:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 466925 ']' 00:28:27.641 19:20:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 466925 00:28:27.641 19:20:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 466925 ']' 00:28:27.641 19:20:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 466925 00:28:27.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (466925) - No such process 00:28:27.641 19:20:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 466925 is not found' 00:28:27.641 Process with pid 466925 is not found 00:28:27.641 19:20:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:27.641 19:20:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:27.641 19:20:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:27.641 00:28:27.641 real 0m17.172s 00:28:27.641 user 0m37.259s 00:28:27.641 sys 0m0.961s 00:28:27.641 19:20:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:27.641 19:20:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:27.641 ************************************ 00:28:27.641 END TEST spdkcli_nvmf_tcp 00:28:27.641 ************************************ 00:28:27.641 19:20:30 -- common/autotest_common.sh@1142 -- # return 0 00:28:27.641 19:20:30 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:27.641 19:20:30 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:27.641 19:20:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:27.641 19:20:30 -- common/autotest_common.sh@10 -- # set +x 00:28:27.641 ************************************ 00:28:27.641 START TEST nvmf_identify_passthru 00:28:27.641 ************************************ 00:28:27.641 19:20:30 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:27.641 * Looking for test storage... 00:28:27.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:27.900 19:20:30 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:27.900 19:20:30 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.900 19:20:30 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.900 19:20:30 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.900 19:20:30 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.900 19:20:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.900 19:20:30 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.900 19:20:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:27.900 19:20:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:27.900 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:27.900 19:20:30 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:27.900 19:20:30 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.900 19:20:30 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.900 19:20:30 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.900 19:20:30 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.901 19:20:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.901 19:20:30 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.901 19:20:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:28:27.901 19:20:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.901 19:20:30 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:28:27.901 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:27.901 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.901 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:27.901 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:27.901 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:27.901 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.901 19:20:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:27.901 19:20:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.901 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:27.901 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:27.901 19:20:30 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:28:27.901 19:20:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.175 19:20:35 
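(Aside: the gather_supported_nvmf_pci_devs walk that follows keys NICs by PCI vendor:device ID, the e810/x722/mlx arrays below. The same inventory can be taken by hand with lspci, assuming it is installed, using the IDs loaded into those arrays:

    lspci -d 8086:159b   # E810 -- the two ports found in this run, 0000:86:00.0/.1
    lspci -d 8086:1592   # the other E810 ID in the e810 array
    lspci -d 8086:37d2   # X722
)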
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:33.175 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:33.175 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:33.175 Found net devices under 0000:86:00.0: cvl_0_0 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:33.175 Found net devices under 0000:86:00.1: cvl_0_1 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
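(Aside: both ports are up, so nvmf_tcp_init runs next. Stripped of the xtrace prefixes, the namespace plumbing it performs, with the interface and namespace names found above, condenses to roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings that follow verify reachability in each direction before any NVMe/TCP traffic is attempted.)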
00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.175 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:33.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:28:33.435 00:28:33.435 --- 10.0.0.2 ping statistics --- 00:28:33.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.435 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:28:33.435 00:28:33.435 --- 10.0.0.1 ping statistics --- 00:28:33.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.435 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:33.435 19:20:35 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:33.435 19:20:35 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:33.435 19:20:35 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:33.435 19:20:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:33.435 19:20:35 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:33.435 19:20:35 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:28:33.435 19:20:35 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:28:33.435 19:20:35 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:28:33.435 19:20:35 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:28:33.435 19:20:35 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:33.435 19:20:35 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:28:33.435 19:20:35 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:33.435 19:20:35 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:33.435 19:20:35 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:33.695 19:20:36 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:28:33.695 19:20:36 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:28:33.695 19:20:36 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:28:33.695 19:20:36 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:28:33.695 19:20:36 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:28:33.695 19:20:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:33.695 19:20:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:33.695 19:20:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:33.695 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.888 
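(Aside: get_first_nvme_bdf above resolves the test NVMe device by asking gen_nvme.sh for every local controller and taking the first traddr. Reduced to a sketch, with head -n1 standing in for the function's array indexing:

    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)   # -> 0000:5e:00.0 here
    build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
        | awk '/Serial Number:/ {print $3}'
)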
19:20:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:28:37.888 19:20:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:37.888 19:20:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:37.888 19:20:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:37.888 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.081 19:20:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:42.081 19:20:44 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:42.081 19:20:44 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:42.081 19:20:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:42.081 19:20:44 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:42.081 19:20:44 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:42.081 19:20:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:42.081 19:20:44 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=474180 00:28:42.081 19:20:44 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:42.081 19:20:44 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:42.081 19:20:44 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 474180 00:28:42.081 19:20:44 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 474180 ']' 00:28:42.081 19:20:44 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.081 19:20:44 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:42.081 19:20:44 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.081 19:20:44 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:42.081 19:20:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:42.081 [2024-07-12 19:20:44.405759] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:28:42.081 [2024-07-12 19:20:44.405806] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.081 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.081 [2024-07-12 19:20:44.479561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:42.081 [2024-07-12 19:20:44.558992] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.081 [2024-07-12 19:20:44.559028] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
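(Aside: the target is launched inside the namespace with --wait-for-rpc, which holds SPDK in its pre-init state; that is what lets the test apply nvmf_set_config --passthru-identify-ctrlr before framework_start_init, as the RPC exchange below shows. A condensed sketch of the same sequence:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # must precede subsystem init
    scripts/rpc.py framework_start_init                        # after this the TCP transport can be created
)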
00:28:42.081 [2024-07-12 19:20:44.559034] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.081 [2024-07-12 19:20:44.559040] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.081 [2024-07-12 19:20:44.559044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:42.081 [2024-07-12 19:20:44.559112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.081 [2024-07-12 19:20:44.559216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.081 [2024-07-12 19:20:44.559327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.081 [2024-07-12 19:20:44.559328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:43.018 19:20:45 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:43.018 19:20:45 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:28:43.018 19:20:45 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:43.018 19:20:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.018 19:20:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:43.018 INFO: Log level set to 20 00:28:43.018 INFO: Requests: 00:28:43.018 { 00:28:43.018 "jsonrpc": "2.0", 00:28:43.018 "method": "nvmf_set_config", 00:28:43.018 "id": 1, 00:28:43.018 "params": { 00:28:43.018 "admin_cmd_passthru": { 00:28:43.018 "identify_ctrlr": true 00:28:43.018 } 00:28:43.018 } 00:28:43.018 } 00:28:43.018 00:28:43.018 INFO: response: 00:28:43.018 { 00:28:43.018 "jsonrpc": "2.0", 00:28:43.018 "id": 1, 00:28:43.018 "result": true 00:28:43.018 } 00:28:43.018 00:28:43.018 19:20:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.018 19:20:45 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:43.018 19:20:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.018 19:20:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:43.018 INFO: Setting log level to 20 00:28:43.018 INFO: Setting log level to 20 00:28:43.018 INFO: Log level set to 20 00:28:43.018 INFO: Log level set to 20 00:28:43.018 INFO: Requests: 00:28:43.018 { 00:28:43.018 "jsonrpc": "2.0", 00:28:43.018 "method": "framework_start_init", 00:28:43.018 "id": 1 00:28:43.018 } 00:28:43.018 00:28:43.018 INFO: Requests: 00:28:43.018 { 00:28:43.018 "jsonrpc": "2.0", 00:28:43.018 "method": "framework_start_init", 00:28:43.018 "id": 1 00:28:43.018 } 00:28:43.018 00:28:43.018 [2024-07-12 19:20:45.320723] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:43.018 INFO: response: 00:28:43.018 { 00:28:43.018 "jsonrpc": "2.0", 00:28:43.018 "id": 1, 00:28:43.018 "result": true 00:28:43.018 } 00:28:43.018 00:28:43.018 INFO: response: 00:28:43.018 { 00:28:43.018 "jsonrpc": "2.0", 00:28:43.018 "id": 1, 00:28:43.018 "result": true 00:28:43.018 } 00:28:43.018 00:28:43.018 19:20:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.018 19:20:45 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:43.018 19:20:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.018 19:20:45 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:28:43.018 INFO: Setting log level to 40 00:28:43.018 INFO: Setting log level to 40 00:28:43.018 INFO: Setting log level to 40 00:28:43.018 [2024-07-12 19:20:45.334267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.018 19:20:45 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.018 19:20:45 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:43.018 19:20:45 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:43.018 19:20:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:43.018 19:20:45 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:28:43.018 19:20:45 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.018 19:20:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:46.309 Nvme0n1 00:28:46.309 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.309 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:46.309 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.309 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:46.309 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.309 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:46.309 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.309 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:46.309 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.309 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:46.309 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.309 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:46.309 [2024-07-12 19:20:48.236516] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.309 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.309 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:46.309 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.309 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:46.309 [ 00:28:46.309 { 00:28:46.309 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:46.309 "subtype": "Discovery", 00:28:46.309 "listen_addresses": [], 00:28:46.309 "allow_any_host": true, 00:28:46.309 "hosts": [] 00:28:46.309 }, 00:28:46.309 { 00:28:46.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:46.309 "subtype": "NVMe", 00:28:46.309 "listen_addresses": [ 00:28:46.309 { 00:28:46.309 "trtype": "TCP", 00:28:46.309 "adrfam": "IPv4", 00:28:46.309 "traddr": "10.0.0.2", 00:28:46.309 "trsvcid": "4420" 00:28:46.309 } 00:28:46.309 ], 00:28:46.309 "allow_any_host": true, 00:28:46.309 "hosts": [], 00:28:46.309 "serial_number": 
"SPDK00000000000001", 00:28:46.309 "model_number": "SPDK bdev Controller", 00:28:46.309 "max_namespaces": 1, 00:28:46.309 "min_cntlid": 1, 00:28:46.309 "max_cntlid": 65519, 00:28:46.309 "namespaces": [ 00:28:46.309 { 00:28:46.309 "nsid": 1, 00:28:46.309 "bdev_name": "Nvme0n1", 00:28:46.309 "name": "Nvme0n1", 00:28:46.309 "nguid": "A35D44DACCB84302989D9539C64A555E", 00:28:46.309 "uuid": "a35d44da-ccb8-4302-989d-9539c64a555e" 00:28:46.309 } 00:28:46.309 ] 00:28:46.309 } 00:28:46.309 ] 00:28:46.309 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.309 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:46.309 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:46.309 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:46.309 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.309 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:28:46.309 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:46.309 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:46.309 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:46.309 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.309 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:46.309 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:28:46.309 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:46.309 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:46.310 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.310 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:46.310 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.310 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:46.310 19:20:48 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:46.310 19:20:48 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:46.310 19:20:48 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:28:46.310 19:20:48 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:46.310 19:20:48 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:28:46.310 19:20:48 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:46.310 19:20:48 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:46.310 rmmod nvme_tcp 00:28:46.310 rmmod nvme_fabrics 00:28:46.310 rmmod nvme_keyring 00:28:46.310 19:20:48 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:46.310 19:20:48 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:28:46.310 19:20:48 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:46.310 19:20:48 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 474180 ']' 00:28:46.310 19:20:48 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 474180 00:28:46.310 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 474180 ']' 00:28:46.310 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 474180 00:28:46.310 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:28:46.310 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:46.310 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 474180 00:28:46.310 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:46.310 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:46.310 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 474180' 00:28:46.310 killing process with pid 474180 00:28:46.310 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 474180 00:28:46.310 19:20:48 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 474180 00:28:47.684 19:20:50 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:47.684 19:20:50 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:47.684 19:20:50 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:47.685 19:20:50 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:47.685 19:20:50 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:47.685 19:20:50 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.685 19:20:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:47.685 19:20:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.640 19:20:52 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:49.640 00:28:49.640 real 0m22.073s 00:28:49.640 user 0m29.753s 00:28:49.640 sys 0m5.106s 00:28:49.640 19:20:52 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:49.640 19:20:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:49.640 ************************************ 00:28:49.640 END TEST nvmf_identify_passthru 00:28:49.640 ************************************ 00:28:49.898 19:20:52 -- common/autotest_common.sh@1142 -- # return 0 00:28:49.898 19:20:52 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:49.898 19:20:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:49.898 19:20:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:49.898 19:20:52 -- common/autotest_common.sh@10 -- # set +x 00:28:49.898 ************************************ 00:28:49.898 START TEST nvmf_dif 00:28:49.898 ************************************ 00:28:49.898 19:20:52 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:49.898 * Looking for test storage... 
00:28:49.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:49.899 19:20:52 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.899 19:20:52 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.899 19:20:52 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.899 19:20:52 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.899 19:20:52 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.899 19:20:52 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.899 19:20:52 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.899 19:20:52 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:28:49.899 19:20:52 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:49.899 19:20:52 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:49.899 19:20:52 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:49.899 19:20:52 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:49.899 19:20:52 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:49.899 19:20:52 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.899 19:20:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:49.899 19:20:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:49.899 19:20:52 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:49.899 19:20:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:56.475 19:20:57 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:56.476 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:56.476 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
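For reference, the discovery loop above works purely from PCI IDs: nvmf/common.sh caches the bus by vendor:device pairs (0x8086:0x159b is the Intel E810 "ice" part this rig carries) and then resolves each function to its kernel interface through sysfs. The same lookup can be done by hand; a minimal sketch, using one of the addresses reported above and assuming the port is already bound to its net driver:

  # Resolve one PCI function (address taken from the log above) to its net device.
  pci=0000:86:00.0
  cat /sys/bus/pci/devices/$pci/vendor    # 0x8086 (Intel)
  cat /sys/bus/pci/devices/$pci/device    # 0x159b (E810)
  ls /sys/bus/pci/devices/$pci/net/       # cvl_0_0 on this rig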
00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:56.476 Found net devices under 0000:86:00.0: cvl_0_0 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:56.476 Found net devices under 0000:86:00.1: cvl_0_1 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.476 19:20:57 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.476 19:20:57 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:56.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:28:56.476 00:28:56.476 --- 10.0.0.2 ping statistics --- 00:28:56.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.476 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:28:56.476 19:20:58 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:56.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:28:56.476 00:28:56.476 --- 10.0.0.1 ping statistics --- 00:28:56.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.476 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:28:56.476 19:20:58 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.476 19:20:58 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:56.476 19:20:58 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:56.476 19:20:58 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:58.385 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:58.385 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:58.385 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:58.385 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:58.385 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:58.385 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:58.385 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:58.385 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:58.385 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:58.385 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:58.385 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:58.385 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:58.385 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:58.385 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:58.385 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:58.385 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:58.385 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:58.385 19:21:00 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.385 19:21:00 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:58.385 19:21:00 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:58.385 19:21:00 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.385 19:21:00 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:58.385 19:21:00 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:58.385 19:21:00 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:58.385 19:21:00 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:58.385 19:21:00 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:58.385 19:21:00 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:58.385 19:21:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:58.385 19:21:00 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=479716 00:28:58.385 19:21:00 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 479716 00:28:58.385 19:21:00 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:58.385 19:21:00 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 479716 ']' 00:28:58.385 19:21:00 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.385 19:21:00 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:58.385 19:21:00 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.385 19:21:00 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:58.385 19:21:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:58.385 [2024-07-12 19:21:00.920561] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:28:58.385 [2024-07-12 19:21:00.920608] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.385 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.644 [2024-07-12 19:21:00.993680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.644 [2024-07-12 19:21:01.071271] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.644 [2024-07-12 19:21:01.071309] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.644 [2024-07-12 19:21:01.071316] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.644 [2024-07-12 19:21:01.071322] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.644 [2024-07-12 19:21:01.071327] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
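Condensed for reference, the nvmf_tcp_init sequence logged above amounts to a short recipe: flush both E810 ports, move cvl_0_0 into a fresh network namespace as the target side, keep cvl_0_1 in the root namespace as the initiator, address them on 10.0.0.0/24, open TCP/4420, verify reachability in both directions, then launch nvmf_tgt inside the namespace. The commands are taken verbatim from the log (workspace path shortened):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
  # then wait for /var/tmp/spdk.sock before issuing RPCs (waitforlisten)

Running the target in its own namespace is what lets a single host drive real NIC-to-NIC TCP traffic: the two E810 ports face each other, with the initiator side (fio's spdk_bdev engine in the tests that follow) in the root namespace and the SPDK target inside cvl_0_0_ns_spdk.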
00:28:58.644 [2024-07-12 19:21:01.071346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.213 19:21:01 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:59.213 19:21:01 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:28:59.213 19:21:01 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:59.213 19:21:01 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:59.213 19:21:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:59.213 19:21:01 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.213 19:21:01 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:59.213 19:21:01 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:59.213 19:21:01 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.213 19:21:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:59.213 [2024-07-12 19:21:01.758404] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.213 19:21:01 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.213 19:21:01 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:59.213 19:21:01 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:59.213 19:21:01 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:59.213 19:21:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:59.473 ************************************ 00:28:59.473 START TEST fio_dif_1_default 00:28:59.473 ************************************ 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:59.473 bdev_null0 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:59.473 [2024-07-12 19:21:01.826670] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:59.473 { 00:28:59.473 "params": { 00:28:59.473 "name": "Nvme$subsystem", 00:28:59.473 "trtype": "$TEST_TRANSPORT", 00:28:59.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.473 "adrfam": "ipv4", 00:28:59.473 "trsvcid": "$NVMF_PORT", 00:28:59.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.473 "hdgst": ${hdgst:-false}, 00:28:59.473 "ddgst": ${ddgst:-false} 00:28:59.473 }, 00:28:59.473 "method": "bdev_nvme_attach_controller" 00:28:59.473 } 00:28:59.473 EOF 00:28:59.473 )") 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:59.473 "params": { 00:28:59.473 "name": "Nvme0", 00:28:59.473 "trtype": "tcp", 00:28:59.473 "traddr": "10.0.0.2", 00:28:59.473 "adrfam": "ipv4", 00:28:59.473 "trsvcid": "4420", 00:28:59.473 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.473 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:59.473 "hdgst": false, 00:28:59.473 "ddgst": false 00:28:59.473 }, 00:28:59.473 "method": "bdev_nvme_attach_controller" 00:28:59.473 }' 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:59.473 19:21:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:59.732 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:59.732 fio-3.35 00:28:59.732 Starting 1 thread 00:28:59.732 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.939 00:29:11.939 filename0: (groupid=0, jobs=1): err= 0: pid=480318: Fri Jul 12 19:21:12 2024 00:29:11.939 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10020msec) 00:29:11.939 slat (nsec): min=6060, max=36614, avg=6429.63, stdev=1523.02 00:29:11.939 clat (usec): min=40841, max=43787, avg=41558.10, stdev=509.89 00:29:11.939 lat (usec): min=40847, max=43812, avg=41564.53, stdev=509.98 00:29:11.939 clat percentiles (usec): 00:29:11.939 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:11.939 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:29:11.939 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:11.939 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:29:11.939 | 99.99th=[43779] 00:29:11.939 bw ( KiB/s): min= 352, max= 416, per=99.78%, avg=384.00, stdev=10.38, samples=20 00:29:11.939 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:29:11.939 
lat (msec) : 50=100.00% 00:29:11.939 cpu : usr=94.25%, sys=5.46%, ctx=14, majf=0, minf=244 00:29:11.939 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:11.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.939 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:11.939 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:11.939 00:29:11.939 Run status group 0 (all jobs): 00:29:11.939 READ: bw=385KiB/s (394kB/s), 385KiB/s-385KiB/s (394kB/s-394kB/s), io=3856KiB (3949kB), run=10020-10020msec 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.939 00:29:11.939 real 0m11.115s 00:29:11.939 user 0m15.934s 00:29:11.939 sys 0m0.865s 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:11.939 ************************************ 00:29:11.939 END TEST fio_dif_1_default 00:29:11.939 ************************************ 00:29:11.939 19:21:12 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:11.939 19:21:12 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:11.939 19:21:12 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:11.939 19:21:12 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:11.939 19:21:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:11.939 ************************************ 00:29:11.939 START TEST fio_dif_1_multi_subsystems 00:29:11.939 ************************************ 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:11.939 19:21:12 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:11.939 bdev_null0 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.939 19:21:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:11.939 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.939 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:11.939 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.939 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:11.939 [2024-07-12 19:21:13.008155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.939 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.939 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:11.939 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:11.939 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:11.939 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:11.939 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.939 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:11.939 bdev_null1 00:29:11.939 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.939 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:11.939 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:11.940 { 00:29:11.940 "params": { 00:29:11.940 "name": "Nvme$subsystem", 00:29:11.940 "trtype": "$TEST_TRANSPORT", 00:29:11.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.940 "adrfam": "ipv4", 00:29:11.940 "trsvcid": "$NVMF_PORT", 00:29:11.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.940 "hdgst": ${hdgst:-false}, 00:29:11.940 "ddgst": ${ddgst:-false} 00:29:11.940 }, 00:29:11.940 "method": "bdev_nvme_attach_controller" 00:29:11.940 } 00:29:11.940 EOF 00:29:11.940 )") 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:11.940 
19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:11.940 { 00:29:11.940 "params": { 00:29:11.940 "name": "Nvme$subsystem", 00:29:11.940 "trtype": "$TEST_TRANSPORT", 00:29:11.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.940 "adrfam": "ipv4", 00:29:11.940 "trsvcid": "$NVMF_PORT", 00:29:11.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.940 "hdgst": ${hdgst:-false}, 00:29:11.940 "ddgst": ${ddgst:-false} 00:29:11.940 }, 00:29:11.940 "method": "bdev_nvme_attach_controller" 00:29:11.940 } 00:29:11.940 EOF 00:29:11.940 )") 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
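The jq step above merges the per-subsystem fragments that gen_nvmf_target_json builds (one bdev_nvme_attach_controller parameter block per subsystem) into a single JSON document, which fio's spdk_bdev engine then reads via --spdk_json_conf; the raw parameter blocks fed into jq for this two-controller run are printed just below. Abbreviated to one controller, and with the outer wrapper shown as the usual SPDK bdev-subsystem skeleton (the wrapper is an assumption here; the log only prints the params blocks), the document looks roughly like:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }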
00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:11.940 "params": { 00:29:11.940 "name": "Nvme0", 00:29:11.940 "trtype": "tcp", 00:29:11.940 "traddr": "10.0.0.2", 00:29:11.940 "adrfam": "ipv4", 00:29:11.940 "trsvcid": "4420", 00:29:11.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:11.940 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:11.940 "hdgst": false, 00:29:11.940 "ddgst": false 00:29:11.940 }, 00:29:11.940 "method": "bdev_nvme_attach_controller" 00:29:11.940 },{ 00:29:11.940 "params": { 00:29:11.940 "name": "Nvme1", 00:29:11.940 "trtype": "tcp", 00:29:11.940 "traddr": "10.0.0.2", 00:29:11.940 "adrfam": "ipv4", 00:29:11.940 "trsvcid": "4420", 00:29:11.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:11.940 "hdgst": false, 00:29:11.940 "ddgst": false 00:29:11.940 }, 00:29:11.940 "method": "bdev_nvme_attach_controller" 00:29:11.940 }' 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:11.940 19:21:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:11.940 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:11.940 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:11.940 fio-3.35 00:29:11.940 Starting 2 threads 00:29:11.940 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.916 00:29:21.916 filename0: (groupid=0, jobs=1): err= 0: pid=482717: Fri Jul 12 19:21:24 2024 00:29:21.916 read: IOPS=191, BW=766KiB/s (785kB/s)(7680KiB/10023msec) 00:29:21.916 slat (nsec): min=6038, max=61244, avg=7971.49, stdev=4220.31 00:29:21.916 clat (usec): min=391, max=42567, avg=20857.75, stdev=20495.32 00:29:21.916 lat (usec): min=398, max=42574, avg=20865.72, stdev=20494.29 00:29:21.916 clat percentiles (usec): 00:29:21.916 | 1.00th=[ 404], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 490], 00:29:21.916 | 30.00th=[ 611], 40.00th=[ 627], 50.00th=[ 799], 60.00th=[41157], 00:29:21.916 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:29:21.916 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:29:21.916 | 99.99th=[42730] 00:29:21.916 
bw ( KiB/s): min= 704, max= 896, per=66.28%, avg=766.40, stdev=35.17, samples=20 00:29:21.916 iops : min= 176, max= 224, avg=191.60, stdev= 8.79, samples=20 00:29:21.916 lat (usec) : 500=21.25%, 750=28.23%, 1000=0.94% 00:29:21.916 lat (msec) : 50=49.58% 00:29:21.916 cpu : usr=97.96%, sys=1.75%, ctx=16, majf=0, minf=174 00:29:21.916 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:21.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.916 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.916 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:21.916 filename1: (groupid=0, jobs=1): err= 0: pid=482718: Fri Jul 12 19:21:24 2024 00:29:21.916 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10023msec) 00:29:21.916 slat (nsec): min=6035, max=42320, avg=9614.18, stdev=6791.13 00:29:21.916 clat (usec): min=575, max=42379, avg=41046.45, stdev=2631.24 00:29:21.916 lat (usec): min=581, max=42407, avg=41056.06, stdev=2631.29 00:29:21.916 clat percentiles (usec): 00:29:21.916 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:29:21.916 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:21.916 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:29:21.916 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:21.916 | 99.99th=[42206] 00:29:21.916 bw ( KiB/s): min= 384, max= 416, per=33.57%, avg=388.80, stdev=11.72, samples=20 00:29:21.916 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:29:21.916 lat (usec) : 750=0.41% 00:29:21.916 lat (msec) : 50=99.59% 00:29:21.916 cpu : usr=98.29%, sys=1.45%, ctx=14, majf=0, minf=50 00:29:21.916 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:21.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.916 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.916 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:21.916 00:29:21.916 Run status group 0 (all jobs): 00:29:21.916 READ: bw=1156KiB/s (1183kB/s), 390KiB/s-766KiB/s (399kB/s-785kB/s), io=11.3MiB (11.9MB), run=10023-10023msec 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:21.916 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.917 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:22.176 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.176 00:29:22.176 real 0m11.511s 00:29:22.176 user 0m26.603s 00:29:22.176 sys 0m0.610s 00:29:22.176 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:22.176 19:21:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:22.176 ************************************ 00:29:22.176 END TEST fio_dif_1_multi_subsystems 00:29:22.176 ************************************ 00:29:22.176 19:21:24 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:22.176 19:21:24 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:22.176 19:21:24 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:22.176 19:21:24 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:22.176 19:21:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:22.176 ************************************ 00:29:22.176 START TEST fio_dif_rand_params 00:29:22.176 ************************************ 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:22.176 19:21:24 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.176 bdev_null0 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:22.176 [2024-07-12 19:21:24.591046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:22.176 { 00:29:22.176 "params": { 00:29:22.176 "name": "Nvme$subsystem", 00:29:22.176 "trtype": "$TEST_TRANSPORT", 
00:29:22.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.176 "adrfam": "ipv4", 00:29:22.176 "trsvcid": "$NVMF_PORT", 00:29:22.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.176 "hdgst": ${hdgst:-false}, 00:29:22.176 "ddgst": ${ddgst:-false} 00:29:22.176 }, 00:29:22.176 "method": "bdev_nvme_attach_controller" 00:29:22.176 } 00:29:22.176 EOF 00:29:22.176 )") 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
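The subsystem exercised by this fio_dif_rand_params pass was plumbed with the rpc_cmd calls shown above; rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock. Spelled out as plain rpc.py invocations, with arguments verbatim from the log (note NULL_DIF=3, i.e. a DIF type-3 null bdev this time, and that the transport was already created once with --dif-insert-or-strip):

  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420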
00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:22.176 19:21:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:22.176 "params": { 00:29:22.176 "name": "Nvme0", 00:29:22.177 "trtype": "tcp", 00:29:22.177 "traddr": "10.0.0.2", 00:29:22.177 "adrfam": "ipv4", 00:29:22.177 "trsvcid": "4420", 00:29:22.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:22.177 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:22.177 "hdgst": false, 00:29:22.177 "ddgst": false 00:29:22.177 }, 00:29:22.177 "method": "bdev_nvme_attach_controller" 00:29:22.177 }' 00:29:22.177 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:22.177 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:22.177 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:22.177 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:22.177 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:22.177 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:22.177 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:22.177 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:22.177 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:22.177 19:21:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:22.436 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:22.436 ... 
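The job file fed to fio on /dev/fd/61 is generated on the fly by gen_fio_conf, so it never appears in the log. Reconstructed from the job banner just above and the fio_dif_rand_params parameters set earlier (bs=128k, numjobs=3, iodepth=3, runtime=5), an equivalent standalone job file would look roughly like this; it is a sketch, not the verbatim generated file, and filename=Nvme0n1 assumes the default bdev name for namespace 1 of the attached controller:

  ; illustrative equivalent of the generated job file
  [global]
  thread=1
  ioengine=spdk_bdev
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  time_based=1
  [filename0]
  filename=Nvme0n1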
00:29:22.436  fio-3.35
00:29:22.436  Starting 3 threads
00:29:22.436  EAL: No free 2048 kB hugepages reported on node 1
00:29:29.001  
00:29:29.001  filename0: (groupid=0, jobs=1): err= 0: pid=484565: Fri Jul 12 19:21:30 2024
00:29:29.001  read: IOPS=312, BW=39.1MiB/s (41.0MB/s)(196MiB/5008msec)
00:29:29.001  slat (nsec): min=6404, max=25912, avg=11341.99, stdev=2161.52
00:29:29.001  clat (usec): min=3343, max=50854, avg=9572.48, stdev=6850.71
00:29:29.001  lat (usec): min=3350, max=50868, avg=9583.82, stdev=6850.72
00:29:29.001  clat percentiles (usec):
00:29:29.001  | 1.00th=[ 3785], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 7242],
00:29:29.001  | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 8979],
00:29:29.002  | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10814],
00:29:29.002  | 99.00th=[49546], 99.50th=[49546], 99.90th=[50070], 99.95th=[50594],
00:29:29.002  | 99.99th=[50594]
00:29:29.002  bw (  KiB/s): min=29440, max=46848, per=34.08%, avg=40038.40, stdev=5676.05, samples=10
00:29:29.002  iops        : min=  230, max=  366, avg=312.80, stdev=44.34, samples=10
00:29:29.002  lat (msec)   : 4=1.47%, 10=84.30%, 20=11.36%, 50=2.68%, 100=0.19%
00:29:29.002  cpu          : usr=94.73%, sys=4.97%, ctx=8, majf=0, minf=35
00:29:29.002  IO depths    : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:29.002  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:29.002  complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:29.002  issued rwts: total=1567,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:29.002  latency   : target=0, window=0, percentile=100.00%, depth=3
00:29:29.002  filename0: (groupid=0, jobs=1): err= 0: pid=484567: Fri Jul 12 19:21:30 2024
00:29:29.002  read: IOPS=305, BW=38.2MiB/s (40.1MB/s)(193MiB/5043msec)
00:29:29.002  slat (nsec): min=6339, max=23106, avg=10983.05, stdev=2185.95
00:29:29.002  clat (usec): min=3260, max=52451, avg=9770.78, stdev=5962.38
00:29:29.002  lat (usec): min=3267, max=52463, avg=9781.76, stdev=5962.53
00:29:29.002  clat percentiles (usec):
00:29:29.002  | 1.00th=[ 3818], 5.00th=[ 5473], 10.00th=[ 6259], 20.00th=[ 7046],
00:29:29.002  | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9765],
00:29:29.002  | 70.00th=[10290], 80.00th=[10814], 90.00th=[11469], 95.00th=[11994],
00:29:29.002  | 99.00th=[48497], 99.50th=[50594], 99.90th=[51119], 99.95th=[52691],
00:29:29.002  | 99.99th=[52691]
00:29:29.002  bw (  KiB/s): min=33024, max=46848, per=33.55%, avg=39424.00, stdev=4725.03, samples=10
00:29:29.002  iops        : min=  258, max=  366, avg=308.00, stdev=36.91, samples=10
00:29:29.002  lat (msec)   : 4=1.75%, 10=64.40%, 20=31.78%, 50=1.49%, 100=0.58%
00:29:29.002  cpu          : usr=94.80%, sys=4.90%, ctx=10, majf=0, minf=134
00:29:29.002  IO depths    : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:29.002  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:29.002  complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:29.002  issued rwts: total=1542,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:29.002  latency   : target=0, window=0, percentile=100.00%, depth=3
00:29:29.002  filename0: (groupid=0, jobs=1): err= 0: pid=484568: Fri Jul 12 19:21:30 2024
00:29:29.002  read: IOPS=303, BW=38.0MiB/s (39.8MB/s)(190MiB/5003msec)
00:29:29.002  slat (nsec): min=6343, max=25131, avg=11058.43, stdev=2067.30
00:29:29.002  clat (usec): min=3553, max=90096, avg=9860.57, stdev=6488.00
00:29:29.002  lat (usec): min=3561, max=90108, avg=9871.63, stdev=6487.98
00:29:29.002  clat percentiles (usec):
00:29:29.002  | 1.00th=[ 4424], 5.00th=[ 5866], 10.00th=[ 6325], 20.00th=[ 7308],
00:29:29.002  | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9634],
00:29:29.002  | 70.00th=[10028], 80.00th=[10552], 90.00th=[11207], 95.00th=[11863],
00:29:29.002  | 99.00th=[48497], 99.50th=[49546], 99.90th=[53216], 99.95th=[89654],
00:29:29.002  | 99.99th=[89654]
00:29:29.002  bw (  KiB/s): min=28672, max=43264, per=32.63%, avg=38343.11, stdev=4497.84, samples=9
00:29:29.002  iops        : min=  224, max=  338, avg=299.56, stdev=35.14, samples=9
00:29:29.002  lat (msec)   : 4=0.33%, 10=68.82%, 20=28.55%, 50=1.84%, 100=0.46%
00:29:29.002  cpu          : usr=95.20%, sys=4.50%, ctx=10, majf=0, minf=84
00:29:29.002  IO depths    : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:29.002  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:29.002  complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:29.002  issued rwts: total=1520,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:29.002  latency   : target=0, window=0, percentile=100.00%, depth=3
00:29:29.002  
00:29:29.002  Run status group 0 (all jobs):
00:29:29.002  READ: bw=115MiB/s (120MB/s), 38.0MiB/s-39.1MiB/s (39.8MB/s-41.0MB/s), io=579MiB (607MB), run=5003-5043msec
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime=
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:29.002  bdev_null0
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:29.002  [2024-07-12 19:21:30.795432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:29.002  bdev_null1
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:29.002  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:29.003  bdev_null2
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=()
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:29.003  {
00:29:29.003  "params": {
00:29:29.003  "name": "Nvme$subsystem",
00:29:29.003  "trtype": "$TEST_TRANSPORT",
00:29:29.003  "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:29.003  "adrfam": "ipv4",
00:29:29.003  "trsvcid": "$NVMF_PORT",
00:29:29.003  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:29.003  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:29.003  "hdgst": ${hdgst:-false},
00:29:29.003  "ddgst": ${ddgst:-false}
00:29:29.003  },
00:29:29.003  "method": "bdev_nvme_attach_controller"
00:29:29.003  }
00:29:29.003  EOF
00:29:29.003  )")
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib=
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:29.003  {
00:29:29.003  "params": {
00:29:29.003  "name": "Nvme$subsystem",
00:29:29.003  "trtype": "$TEST_TRANSPORT",
00:29:29.003  "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:29.003  "adrfam": "ipv4",
00:29:29.003  "trsvcid": "$NVMF_PORT",
00:29:29.003  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:29.003  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:29.003  "hdgst": ${hdgst:-false},
00:29:29.003  "ddgst": ${ddgst:-false}
00:29:29.003  },
00:29:29.003  "method": "bdev_nvme_attach_controller"
00:29:29.003  }
00:29:29.003  EOF
00:29:29.003  )")
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:29.003  {
00:29:29.003  "params": {
00:29:29.003  "name": "Nvme$subsystem",
00:29:29.003  "trtype": "$TEST_TRANSPORT",
00:29:29.003  "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:29.003  "adrfam": "ipv4",
00:29:29.003  "trsvcid": "$NVMF_PORT",
00:29:29.003  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:29.003  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:29.003  "hdgst": ${hdgst:-false},
00:29:29.003  "ddgst": ${ddgst:-false}
00:29:29.003  },
00:29:29.003  "method": "bdev_nvme_attach_controller"
00:29:29.003  }
00:29:29.003  EOF
00:29:29.003  )")
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq .
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=,
00:29:29.003  19:21:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:29:29.003  "params": {
00:29:29.003  "name": "Nvme0",
00:29:29.003  "trtype": "tcp",
00:29:29.003  "traddr": "10.0.0.2",
00:29:29.003  "adrfam": "ipv4",
00:29:29.003  "trsvcid": "4420",
00:29:29.003  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:29.003  "hostnqn": "nqn.2016-06.io.spdk:host0",
00:29:29.003  "hdgst": false,
00:29:29.003  "ddgst": false
00:29:29.003  },
00:29:29.003  "method": "bdev_nvme_attach_controller"
00:29:29.003  },{
00:29:29.003  "params": {
00:29:29.003  "name": "Nvme1",
00:29:29.003  "trtype": "tcp",
00:29:29.003  "traddr": "10.0.0.2",
00:29:29.003  "adrfam": "ipv4",
00:29:29.003  "trsvcid": "4420",
00:29:29.003  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:29.003  "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:29.003  "hdgst": false,
00:29:29.003  "ddgst": false
00:29:29.003  },
00:29:29.003  "method": "bdev_nvme_attach_controller"
00:29:29.003  },{
00:29:29.003  "params": {
00:29:29.003  "name": "Nvme2",
00:29:29.003  "trtype": "tcp",
00:29:29.003  "traddr": "10.0.0.2",
00:29:29.003  "adrfam": "ipv4",
00:29:29.003  "trsvcid": "4420",
00:29:29.003  "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:29:29.003  "hostnqn": "nqn.2016-06.io.spdk:host2",
00:29:29.003  "hdgst": false,
00:29:29.003  "ddgst": false
00:29:29.003  },
00:29:29.004  "method": "bdev_nvme_attach_controller"
00:29:29.004  }'
00:29:29.004  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:29:29.004  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:29:29.004  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:29:29.004  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:29:29.004  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:29:29.004  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:29:29.004  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:29:29.004  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:29:29.004  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:29:29.004  19:21:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:29.004  filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:29:29.004  ...
00:29:29.004  filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:29:29.004  ...
00:29:29.004  filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:29:29.004  ...
00:29:29.004  fio-3.35
00:29:29.004  Starting 24 threads
00:29:29.004  EAL: No free 2048 kB hugepages reported on node 1
00:29:41.229  
00:29:41.229  filename0: (groupid=0, jobs=1): err= 0: pid=485739: Fri Jul 12 19:21:42 2024
00:29:41.229  read: IOPS=591, BW=2365KiB/s (2422kB/s)(23.2MiB/10024msec)
00:29:41.229  slat (nsec): min=6850, max=77855, avg=15075.04, stdev=6912.97
00:29:41.229  clat (usec): min=6121, max=44156, avg=26937.63, stdev=3869.02
00:29:41.229  lat (usec): min=6130, max=44201, avg=26952.70, stdev=3870.42
00:29:41.229  clat percentiles (usec):
00:29:41.229  | 1.00th=[ 6259], 5.00th=[17171], 10.00th=[25822], 20.00th=[27657],
00:29:41.229  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919],
00:29:41.229  | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705],
00:29:41.229  | 99.00th=[29492], 99.50th=[30016], 99.90th=[42730], 99.95th=[42730],
00:29:41.229  | 99.99th=[44303]
00:29:41.229  bw (  KiB/s): min= 2176, max= 2888, per=4.29%, avg=2364.40, stdev=203.26, samples=20
00:29:41.229  iops        : min=  544, max=  722, avg=591.10, stdev=50.81, samples=20
00:29:41.229  lat (msec)   : 10=2.24%, 20=3.04%, 50=94.72%
00:29:41.229  cpu          : usr=98.82%, sys=0.77%, ctx=15, majf=0, minf=32
00:29:41.229  IO depths    : 1=5.2%, 2=10.6%, 4=22.3%, 8=54.5%, 16=7.4%, 32=0.0%, >=64=0.0%
00:29:41.229  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.229  complete  : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.229  issued rwts: total=5927,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.229  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.230  filename0: (groupid=0, jobs=1): err= 0: pid=485740: Fri Jul 12 19:21:42 2024
00:29:41.230  read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec)
00:29:41.230  slat (nsec): min=6889, max=94118, avg=36147.69, stdev=18437.28
00:29:41.230  clat (usec): min=3745, max=58503, avg=27833.30, stdev=2107.96
00:29:41.230  lat (usec): min=3759, max=58551, avg=27869.45, stdev=2107.33
00:29:41.230  clat percentiles (usec):
00:29:41.230  | 1.00th=[26608], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395],
00:29:41.230  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919],
00:29:41.230  | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705],
00:29:41.230  | 99.00th=[29754], 99.50th=[30278], 99.90th=[58459], 99.95th=[58459],
00:29:41.230  | 99.99th=[58459]
00:29:41.230  bw (  KiB/s): min= 2052, max= 2304, per=4.12%, avg=2270.53, stdev=71.25, samples=19
00:29:41.230  iops        : min=  513, max=  576, avg=567.63, stdev=17.81, samples=19
00:29:41.230  lat (msec)   : 4=0.28%, 50=99.44%, 100=0.28%
00:29:41.230  cpu          : usr=98.89%, sys=0.75%, ctx=15, majf=0, minf=25
00:29:41.230  IO depths    : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0%
00:29:41.230  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.230  complete  : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.230  issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.230  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.230  filename0: (groupid=0, jobs=1): err= 0: pid=485741: Fri Jul 12 19:21:42 2024
00:29:41.230  read: IOPS=566, BW=2268KiB/s (2322kB/s)(22.2MiB/10005msec)
00:29:41.230  slat (nsec): min=6874, max=93788, avg=43336.94, stdev=18245.28
00:29:41.230  clat (usec): min=17691, max=75789, avg=27843.04, stdev=2872.78
00:29:41.230  lat (usec): min=17712, max=75809, avg=27886.38, stdev=2871.08
00:29:41.230  clat percentiles (usec):
00:29:41.230  | 1.00th=[25035], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395],
00:29:41.230  | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657],
00:29:41.230  | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443],
00:29:41.230  | 99.00th=[31327], 99.50th=[42730], 99.90th=[76022], 99.95th=[76022],
00:29:41.230  | 99.99th=[76022]
00:29:41.230  bw (  KiB/s): min= 1968, max= 2304, per=4.10%, avg=2260.21, stdev=85.64, samples=19
00:29:41.230  iops        : min=  492, max=  576, avg=565.05, stdev=21.41, samples=19
00:29:41.230  lat (msec)   : 20=0.32%, 50=99.40%, 100=0.28%
00:29:41.230  cpu          : usr=98.89%, sys=0.73%, ctx=12, majf=0, minf=16
00:29:41.230  IO depths    : 1=5.7%, 2=11.5%, 4=23.6%, 8=52.1%, 16=7.1%, 32=0.0%, >=64=0.0%
00:29:41.230  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.230  complete  : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.230  issued rwts: total=5672,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.230  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.230  filename0: (groupid=0, jobs=1): err= 0: pid=485742: Fri Jul 12 19:21:42 2024
00:29:41.230  read: IOPS=568, BW=2276KiB/s (2330kB/s)(22.2MiB/10002msec)
00:29:41.230  slat (nsec): min=5968, max=93618, avg=43694.02, stdev=18641.61
00:29:41.230  clat (usec): min=14684, max=73183, avg=27729.57, stdev=2823.14
00:29:41.230  lat (usec): min=14691, max=73198, avg=27773.26, stdev=2822.22
00:29:41.230  clat percentiles (usec):
00:29:41.230  | 1.00th=[21627], 5.00th=[27132], 10.00th=[27132], 20.00th=[27132],
00:29:41.230  | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657],
00:29:41.230  | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443],
00:29:41.230  | 99.00th=[31327], 99.50th=[37487], 99.90th=[72877], 99.95th=[72877],
00:29:41.230  | 99.99th=[72877]
00:29:41.230  bw (  KiB/s): min= 2064, max= 2368, per=4.12%, avg=2274.53, stdev=72.40, samples=19
00:29:41.230  iops        : min=  516, max=  592, avg=568.63, stdev=18.10, samples=19
00:29:41.230  lat (msec)   : 20=0.63%, 50=99.09%, 100=0.28%
00:29:41.230  cpu          : usr=98.89%, sys=0.74%, ctx=13, majf=0, minf=22
00:29:41.230  IO depths    : 1=5.9%, 2=11.8%, 4=23.8%, 8=51.8%, 16=6.8%, 32=0.0%, >=64=0.0%
00:29:41.230  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.230  complete  : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.230  issued rwts: total=5690,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.230  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.230  filename0: (groupid=0, jobs=1): err= 0: pid=485743: Fri Jul 12 19:21:42 2024
00:29:41.230  read: IOPS=569, BW=2279KiB/s (2334kB/s)(22.3MiB/10025msec)
00:29:41.230  slat (nsec): min=7053, max=86588, avg=20445.30, stdev=8064.34
00:29:41.230  clat (usec): min=16718, max=43276, avg=27913.55, stdev=1085.47
00:29:41.230  lat (usec): min=16742, max=43296, avg=27934.00, stdev=1084.91
00:29:41.230  clat percentiles (usec):
00:29:41.230  | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657],
00:29:41.230  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919],
00:29:41.230  | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705],
00:29:41.230  | 99.00th=[28967], 99.50th=[30016], 99.90th=[43254], 99.95th=[43254],
00:29:41.230  | 99.99th=[43254]
00:29:41.230  bw (  KiB/s): min= 2176, max= 2304, per=4.13%, avg=2278.40, stdev=52.53, samples=20
00:29:41.230  iops        : min=  544, max=  576, avg=569.60, stdev=13.13, samples=20
00:29:41.230  lat (msec)   : 20=0.32%, 50=99.68%
00:29:41.230  cpu          : usr=98.92%, sys=0.71%, ctx=12, majf=0, minf=29
00:29:41.230  IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:29:41.230  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.230  complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.230  issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.230  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.230  filename0: (groupid=0, jobs=1): err= 0: pid=485744: Fri Jul 12 19:21:42 2024
00:29:41.230  read: IOPS=613, BW=2453KiB/s (2511kB/s)(24.0MiB/10006msec)
00:29:41.230  slat (nsec): min=6843, max=80757, avg=11292.94, stdev=5792.93
00:29:41.230  clat (usec): min=874, max=41060, avg=26003.36, stdev=6370.75
00:29:41.230  lat (usec): min=881, max=41117, avg=26014.66, stdev=6370.98
00:29:41.230  clat percentiles (usec):
00:29:41.230  | 1.00th=[ 1369], 5.00th=[ 5145], 10.00th=[24773], 20.00th=[27657],
00:29:41.230  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919],
00:29:41.230  | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705],
00:29:41.230  | 99.00th=[29230], 99.50th=[30016], 99.90th=[40633], 99.95th=[40633],
00:29:41.230  | 99.99th=[41157]
00:29:41.230  bw (  KiB/s): min= 2176, max= 4208, per=4.44%, avg=2447.60, stdev=447.19, samples=20
00:29:41.230  iops        : min=  544, max= 1052, avg=611.90, stdev=111.80, samples=20
00:29:41.230  lat (usec)   : 1000=0.11%
00:29:41.230  lat (msec)   : 2=3.65%, 4=0.26%, 10=2.79%, 20=1.35%, 50=91.83%
00:29:41.230  cpu          : usr=98.76%, sys=0.86%, ctx=10, majf=0, minf=35
00:29:41.230  IO depths    : 1=5.0%, 2=10.3%, 4=21.5%, 8=55.6%, 16=7.6%, 32=0.0%, >=64=0.0%
00:29:41.230  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.230  complete  : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.231  issued rwts: total=6135,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.231  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.231  filename0: (groupid=0, jobs=1): err= 0: pid=485745: Fri Jul 12 19:21:42 2024
00:29:41.231  read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec)
00:29:41.231  slat (nsec): min=6852, max=45348, avg=16450.07, stdev=4531.38
00:29:41.231  clat (usec): min=4881, max=58863, avg=27944.48, stdev=2065.52
00:29:41.231  lat (usec): min=4888, max=58876, avg=27960.93, stdev=2066.02
00:29:41.231  clat percentiles (usec):
00:29:41.231  | 1.00th=[27395], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657],
00:29:41.231  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919],
00:29:41.231  | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705],
00:29:41.231  | 99.00th=[29230], 99.50th=[30016], 99.90th=[58983], 99.95th=[58983],
00:29:41.231  | 99.99th=[58983]
00:29:41.231  bw (  KiB/s): min= 2048, max= 2304, per=4.12%, avg=2270.32, stdev=71.93, samples=19
00:29:41.231  iops        : min=  512, max=  576, avg=567.58, stdev=17.98, samples=19
00:29:41.231  lat (msec)   : 10=0.28%, 50=99.44%, 100=0.28%
00:29:41.231  cpu          : usr=98.76%, sys=0.85%, ctx=11, majf=0, minf=19
00:29:41.231  IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:29:41.231  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.231  complete  : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.231  issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.231  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.231  filename0: (groupid=0, jobs=1): err= 0: pid=485746: Fri Jul 12 19:21:42 2024
00:29:41.231  read: IOPS=601, BW=2405KiB/s (2463kB/s)(23.5MiB/10024msec)
00:29:41.231  slat (nsec): min=6830, max=44722, avg=16196.56, stdev=5918.40
00:29:41.231  clat (usec): min=6295, max=43203, avg=26472.80, stdev=3948.18
00:29:41.231  lat (usec): min=6305, max=43234, avg=26489.00, stdev=3950.81
00:29:41.231  clat percentiles (usec):
00:29:41.231  | 1.00th=[15270], 5.00th=[16057], 10.00th=[17433], 20.00th=[27395],
00:29:41.231  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657],
00:29:41.231  | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443],
00:29:41.231  | 99.00th=[29754], 99.50th=[30016], 99.90th=[43254], 99.95th=[43254],
00:29:41.231  | 99.99th=[43254]
00:29:41.231  bw (  KiB/s): min= 2176, max= 3232, per=4.36%, avg=2404.80, stdev=307.38, samples=20
00:29:41.231  iops        : min=  544, max=  808, avg=601.20, stdev=76.84, samples=20
00:29:41.231  lat (msec)   : 10=0.61%, 20=9.65%, 50=89.73%
00:29:41.231  cpu          : usr=98.78%, sys=0.84%, ctx=11, majf=0, minf=22
00:29:41.231  IO depths    : 1=4.8%, 2=9.9%, 4=21.4%, 8=56.2%, 16=7.8%, 32=0.0%, >=64=0.0%
00:29:41.231  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.231  complete  : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.231  issued rwts: total=6028,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.231  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.231  filename1: (groupid=0, jobs=1): err= 0: pid=485747: Fri Jul 12 19:21:42 2024
00:29:41.231  read: IOPS=569, BW=2279KiB/s (2334kB/s)(22.3MiB/10026msec)
00:29:41.231  slat (nsec): min=7349, max=92849, avg=24979.85, stdev=12351.66
00:29:41.231  clat (usec): min=16745, max=43329, avg=27887.19, stdev=1083.80
00:29:41.231  lat (usec): min=16760, max=43349, avg=27912.17, stdev=1082.56
00:29:41.231  clat percentiles (usec):
00:29:41.231  | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657],
00:29:41.231  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919],
00:29:41.231  | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443],
00:29:41.231  | 99.00th=[28967], 99.50th=[30016], 99.90th=[43254], 99.95th=[43254],
00:29:41.231  | 99.99th=[43254]
00:29:41.231  bw (  KiB/s): min= 2176, max= 2304, per=4.13%, avg=2278.40, stdev=52.53, samples=20
00:29:41.231  iops        : min=  544, max=  576, avg=569.60, stdev=13.13, samples=20
00:29:41.231  lat (msec)   : 20=0.28%, 50=99.72%
00:29:41.231  cpu          : usr=98.72%, sys=0.90%, ctx=17, majf=0, minf=26
00:29:41.231  IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:29:41.231  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.231  complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.231  issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.231  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.231  filename1: (groupid=0, jobs=1): err= 0: pid=485748: Fri Jul 12 19:21:42 2024
00:29:41.231  read: IOPS=570, BW=2284KiB/s (2338kB/s)(22.4MiB/10026msec)
00:29:41.231  slat (usec): min=6, max=166, avg=41.66, stdev=15.25
00:29:41.231  clat (usec): min=16588, max=52098, avg=27650.89, stdev=1612.13
00:29:41.231  lat (usec): min=16595, max=52117, avg=27692.55, stdev=1613.13
00:29:41.231  clat percentiles (usec):
00:29:41.231  | 1.00th=[19792], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395],
00:29:41.231  | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657],
00:29:41.231  | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443],
00:29:41.231  | 99.00th=[29754], 99.50th=[35914], 99.90th=[43254], 99.95th=[43254],
00:29:41.231  | 99.99th=[52167]
00:29:41.231  bw (  KiB/s): min= 2176, max= 2304, per=4.14%, avg=2283.20, stdev=46.75, samples=20
00:29:41.231  iops        : min=  544, max=  576, avg=570.80, stdev=11.69, samples=20
00:29:41.231  lat (msec)   : 20=1.12%, 50=98.85%, 100=0.03%
00:29:41.231  cpu          : usr=98.76%, sys=0.67%, ctx=108, majf=0, minf=21
00:29:41.231  IO depths    : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0%
00:29:41.231  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.231  complete  : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.231  issued rwts: total=5724,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.231  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.231  filename1: (groupid=0, jobs=1): err= 0: pid=485749: Fri Jul 12 19:21:42 2024
00:29:41.231  read: IOPS=571, BW=2285KiB/s (2340kB/s)(22.3MiB/10011msec)
00:29:41.231  slat (nsec): min=6518, max=90942, avg=44584.05, stdev=17629.42
00:29:41.231  clat (usec): min=14705, max=77598, avg=27612.01, stdev=2769.71
00:29:41.231  lat (usec): min=14713, max=77616, avg=27656.59, stdev=2770.35
00:29:41.231  clat percentiles (usec):
00:29:41.231  | 1.00th=[18220], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395],
00:29:41.231  | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657],
00:29:41.231  | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443],
00:29:41.231  | 99.00th=[29754], 99.50th=[35914], 99.90th=[69731], 99.95th=[69731],
00:29:41.231  | 99.99th=[78119]
00:29:41.231  bw (  KiB/s): min= 2176, max= 2368, per=4.14%, avg=2281.60, stdev=56.01, samples=20
00:29:41.231  iops        : min=  544, max=  592, avg=570.40, stdev=14.00, samples=20
00:29:41.231  lat (msec)   : 20=1.29%, 50=98.43%, 100=0.28%
00:29:41.231  cpu          : usr=98.96%, sys=0.66%, ctx=13, majf=0, minf=27
00:29:41.231  IO depths    : 1=6.0%, 2=12.0%, 4=24.2%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0%
00:29:41.231  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.231  complete  : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.232  issued rwts: total=5720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.232  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.232  filename1: (groupid=0, jobs=1): err= 0: pid=485750: Fri Jul 12 19:21:42 2024
00:29:41.232  read: IOPS=567, BW=2270KiB/s (2324kB/s)(22.2MiB/10021msec)
00:29:41.232  slat (nsec): min=7040, max=91554, avg=41031.43, stdev=18146.62
00:29:41.232  clat (usec): min=16308, max=79539, avg=27880.99, stdev=2831.42
00:29:41.232  lat (usec): min=16316, max=79564, avg=27922.03, stdev=2829.39
00:29:41.232  clat percentiles (usec):
00:29:41.232  | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395],
00:29:41.232  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919],
00:29:41.232  | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443],
00:29:41.232  | 99.00th=[29492], 99.50th=[30016], 99.90th=[79168], 99.95th=[79168],
00:29:41.232  | 99.99th=[79168]
00:29:41.232  bw (  KiB/s): min= 1920, max= 2352, per=4.11%, avg=2268.00, stdev=95.42, samples=20
00:29:41.232  iops        : min=  480, max=  588, avg=567.00, stdev=23.85, samples=20
00:29:41.232  lat (msec)   : 20=0.23%, 50=99.49%, 100=0.28%
00:29:41.232  cpu          : usr=99.03%, sys=0.61%, ctx=11, majf=0, minf=26
00:29:41.232  IO depths    : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0%
00:29:41.232  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.232  complete  : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.232  issued rwts: total=5686,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.232  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.232  filename1: (groupid=0, jobs=1): err= 0: pid=485751: Fri Jul 12 19:21:42 2024
00:29:41.232  read: IOPS=569, BW=2279KiB/s (2334kB/s)(22.3MiB/10025msec)
00:29:41.232  slat (nsec): min=6894, max=49476, avg=17083.69, stdev=6639.53
00:29:41.232  clat (usec): min=16793, max=44167, avg=27939.11, stdev=1061.23
00:29:41.232  lat (usec): min=16821, max=44193, avg=27956.19, stdev=1060.86
00:29:41.232  clat percentiles (usec):
00:29:41.232  | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657],
00:29:41.232  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919],
00:29:41.232  | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705],
00:29:41.232  | 99.00th=[29230], 99.50th=[30016], 99.90th=[43254], 99.95th=[43254],
00:29:41.232  | 99.99th=[44303]
00:29:41.232  bw (  KiB/s): min= 2176, max= 2304, per=4.13%, avg=2278.40, stdev=52.53, samples=20
00:29:41.232  iops        : min=  544, max=  576, avg=569.60, stdev=13.13, samples=20
00:29:41.232  lat (msec)   : 20=0.28%, 50=99.72%
00:29:41.232  cpu          : usr=98.74%, sys=0.88%, ctx=4, majf=0, minf=18
00:29:41.232  IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:29:41.232  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.232  complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.232  issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.232  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.232  filename1: (groupid=0, jobs=1): err= 0: pid=485752: Fri Jul 12 19:21:42 2024
00:29:41.232  read: IOPS=569, BW=2279KiB/s (2334kB/s)(22.3MiB/10026msec)
00:29:41.232  slat (nsec): min=7255, max=97154, avg=35687.74, stdev=17297.87
00:29:41.232  clat (usec): min=16751, max=43388, avg=27822.72, stdev=1115.46
00:29:41.232  lat (usec): min=16772, max=43403, avg=27858.41, stdev=1112.91
00:29:41.232  clat percentiles (usec):
00:29:41.232  | 1.00th=[26870], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395],
00:29:41.232  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919],
00:29:41.232  | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443],
00:29:41.232  | 99.00th=[28967], 99.50th=[30016], 99.90th=[43254], 99.95th=[43254],
00:29:41.232  | 99.99th=[43254]
00:29:41.232  bw (  KiB/s): min= 2176, max= 2304, per=4.13%, avg=2278.40, stdev=52.53, samples=20
00:29:41.232  iops        : min=  544, max=  576, avg=569.60, stdev=13.13, samples=20
00:29:41.232  lat (msec)   : 20=0.35%, 50=99.65%
00:29:41.232  cpu          : usr=98.82%, sys=0.79%, ctx=19, majf=0, minf=26
00:29:41.232  IO depths    : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:29:41.232  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.232  complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.232  issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.232  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.232  filename1: (groupid=0, jobs=1): err= 0: pid=485753: Fri Jul 12 19:21:42 2024
00:29:41.232  read: IOPS=568, BW=2273KiB/s (2328kB/s)(22.2MiB/10016msec)
00:29:41.232  slat (nsec): min=6937, max=91548, avg=42427.38, stdev=18124.83
00:29:41.232  clat (usec): min=14487, max=77600, avg=27823.58, stdev=2969.23
00:29:41.232  lat (usec): min=14495, max=77625, avg=27866.01, stdev=2969.14
00:29:41.232  clat percentiles (usec):
00:29:41.232  | 1.00th=[20841], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395],
00:29:41.232  | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919],
00:29:41.232  | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705],
00:29:41.232  | 99.00th=[33817], 99.50th=[35914], 99.90th=[73925], 99.95th=[77071],
00:29:41.232  | 99.99th=[78119]
00:29:41.232  bw (  KiB/s): min= 2016, max= 2304, per=4.12%, avg=2270.40, stdev=75.92, samples=20
00:29:41.232  iops        : min=  504, max=  576, avg=567.60, stdev=18.98, samples=20
00:29:41.232  lat (msec)   : 20=0.56%, 50=99.16%, 100=0.28%
00:29:41.232  cpu          : usr=98.88%, sys=0.74%, ctx=13, majf=0, minf=20
00:29:41.232  IO depths    : 1=4.8%, 2=10.9%, 4=24.4%, 8=52.1%, 16=7.7%, 32=0.0%, >=64=0.0%
00:29:41.232  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.232  complete  : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.232  issued rwts: total=5692,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.232  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.232  filename1: (groupid=0, jobs=1): err= 0: pid=485754: Fri Jul 12 19:21:42 2024
00:29:41.232  read: IOPS=567, BW=2272KiB/s (2326kB/s)(22.2MiB/10002msec)
00:29:41.232  slat (nsec): min=7040, max=38730, avg=16126.98, stdev=4052.27
00:29:41.232  clat (usec): min=26500, max=62432, avg=28027.42, stdev=1864.17
00:29:41.232  lat (usec): min=26515, max=62452, avg=28043.54, stdev=1863.78
00:29:41.232  clat percentiles (usec):
00:29:41.232  | 1.00th=[27395], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657],
00:29:41.232  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919],
00:29:41.232  | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705],
00:29:41.232  | 99.00th=[29230], 99.50th=[30016], 99.90th=[62129], 99.95th=[62653],
00:29:41.232  | 99.99th=[62653]
00:29:41.232  bw (  KiB/s): min= 2052, max= 2304, per=4.12%, avg=2270.53, stdev=71.25, samples=19
00:29:41.232  iops        : min=  513, max=  576, avg=567.63, stdev=17.81, samples=19
00:29:41.232  lat (msec)   : 50=99.72%, 100=0.28%
00:29:41.232  cpu          : usr=98.73%, sys=0.90%, ctx=11, majf=0, minf=21
00:29:41.232  IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:29:41.232  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.232  complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.232  issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.232  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.233  filename2: (groupid=0, jobs=1): err= 0: pid=485755: Fri Jul 12 19:21:42 2024
00:29:41.233  read: IOPS=569, BW=2279KiB/s (2334kB/s)(22.3MiB/10024msec)
00:29:41.233  slat (nsec): min=7218, max=56043, avg=17597.79, stdev=6210.35
00:29:41.233  clat (usec): min=16661, max=43156, avg=27931.83, stdev=1169.04
00:29:41.233  lat (usec): min=16691, max=43195, avg=27949.43, stdev=1168.74
00:29:41.233  clat percentiles (usec):
00:29:41.233  | 1.00th=[25822], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657],
00:29:41.233  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919],
00:29:41.233  | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705],
00:29:41.233  | 99.00th=[30016], 99.50th=[31589], 99.90th=[43254], 99.95th=[43254],
00:29:41.233  | 99.99th=[43254]
00:29:41.233  bw (  KiB/s): min= 2176, max= 2304, per=4.13%, avg=2278.40, stdev=52.53, samples=20
00:29:41.233  iops        : min=  544, max=  576, avg=569.60, stdev=13.13, samples=20
00:29:41.233  lat (msec)   : 20=0.28%, 50=99.72%
00:29:41.233  cpu          : usr=98.70%, sys=0.92%, ctx=18, majf=0, minf=19
00:29:41.233  IO depths    : 1=4.9%, 2=11.0%, 4=24.5%, 8=52.0%, 16=7.6%, 32=0.0%, >=64=0.0%
00:29:41.233  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.233  complete  : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.233  issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.233  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.233  filename2: (groupid=0, jobs=1): err= 0: pid=485756: Fri Jul 12 19:21:42 2024
00:29:41.233  read: IOPS=569, BW=2279KiB/s (2334kB/s)(22.3MiB/10025msec)
00:29:41.233  slat (nsec): min=6986, max=67033, avg=17319.15, stdev=6748.92
00:29:41.233  clat (usec): min=16742, max=43283, avg=27936.47, stdev=1060.02
00:29:41.233  lat (usec): min=16767, max=43303, avg=27953.79, stdev=1059.59
00:29:41.233  clat percentiles (usec):
00:29:41.233  | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657],
00:29:41.233  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919],
00:29:41.233  | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705],
00:29:41.233  | 99.00th=[28967], 99.50th=[30016], 99.90th=[43254], 99.95th=[43254],
00:29:41.233  | 99.99th=[43254]
00:29:41.233  bw (  KiB/s): min= 2176, max= 2304, per=4.13%, avg=2278.40, stdev=52.53, samples=20
00:29:41.233  iops        : min=  544, max=  576, avg=569.60, stdev=13.13, samples=20
00:29:41.233  lat (msec)   : 20=0.28%, 50=99.72%
00:29:41.233  cpu          : usr=98.77%, sys=0.87%, ctx=13, majf=0, minf=18
00:29:41.233  IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:29:41.233  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.233  complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.233  issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.233  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.233  filename2: (groupid=0, jobs=1): err= 0: pid=485757: Fri Jul 12 19:21:42 2024
00:29:41.233  read: IOPS=604, BW=2417KiB/s (2475kB/s)(23.7MiB/10025msec)
00:29:41.233  slat (nsec): min=6841, max=51859, avg=14074.55, stdev=5904.66
00:29:41.233  clat (usec): min=4762, max=43123, avg=26373.90, stdev=4628.76
00:29:41.233  lat (usec): min=4771, max=43157, avg=26387.98, stdev=4630.40
00:29:41.233  clat percentiles (usec):
00:29:41.233  | 1.00th=[ 5932], 5.00th=[15795], 10.00th=[24511], 20.00th=[26608],
00:29:41.233  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919],
00:29:41.233  | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705],
00:29:41.233  | 99.00th=[30540], 99.50th=[31065], 99.90th=[42730], 99.95th=[43254],
00:29:41.233  | 99.99th=[43254]
00:29:41.233  bw (  KiB/s): min= 2176, max= 3144, per=4.38%, avg=2416.40, stdev=216.56, samples=20
00:29:41.233  iops        : min=  544, max=  786, avg=604.10, stdev=54.14, samples=20
00:29:41.233  lat (msec)   : 10=2.86%, 20=6.01%, 50=91.13%
00:29:41.233  cpu          : usr=98.65%, sys=0.97%, ctx=12, majf=0, minf=27
00:29:41.233  IO depths    : 1=2.9%, 2=7.5%, 4=19.2%, 8=60.6%, 16=9.8%, 32=0.0%, >=64=0.0%
00:29:41.233  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.233  complete  : 0=0.0%, 4=92.6%, 8=1.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.233  issued rwts: total=6057,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.233  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.233  filename2: (groupid=0, jobs=1): err= 0: pid=485758: Fri Jul 12 19:21:42 2024
00:29:41.233  read: IOPS=569, BW=2279KiB/s (2334kB/s)(22.3MiB/10026msec)
00:29:41.233  slat (nsec): min=9176, max=92999, avg=29558.68, stdev=15083.46
00:29:41.233  clat (usec): min=16737, max=43438, avg=27861.69, stdev=1078.06
00:29:41.233  lat (usec): min=16752, max=43451, avg=27891.25, stdev=1076.32
00:29:41.233  clat percentiles (usec):
00:29:41.233  | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657],
00:29:41.233  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919],
00:29:41.233  | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443],
00:29:41.233  | 99.00th=[28967], 99.50th=[29754], 99.90th=[43254], 99.95th=[43254],
00:29:41.233  | 99.99th=[43254]
00:29:41.233  bw (  KiB/s): min= 2176, max= 2304, per=4.13%, avg=2278.40, stdev=52.53, samples=20
00:29:41.233  iops        : min=  544, max=  576, avg=569.60, stdev=13.13, samples=20
00:29:41.233  lat (msec)   : 20=0.28%, 50=99.72%
00:29:41.233  cpu          : usr=98.82%, sys=0.80%, ctx=11, majf=0, minf=18
00:29:41.233  IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:29:41.233  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.233  complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.233  issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.233  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.233  filename2: (groupid=0, jobs=1): err= 0: pid=485759: Fri Jul 12 19:21:42 2024
00:29:41.233  read: IOPS=569, BW=2279KiB/s (2334kB/s)(22.3MiB/10026msec)
00:29:41.233  slat (nsec): min=8406, max=92333, avg=33639.58, stdev=17021.75
00:29:41.233  clat (usec): min=16731, max=43400, avg=27835.81, stdev=1076.64
00:29:41.233  lat (usec): min=16746, max=43417, avg=27869.45, stdev=1074.53
00:29:41.233  clat percentiles (usec):
00:29:41.233  | 1.00th=[26870], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395],
00:29:41.233  | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919],
00:29:41.233  | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443],
00:29:41.233  | 99.00th=[28967], 99.50th=[29754], 99.90th=[43254], 99.95th=[43254],
00:29:41.233  | 99.99th=[43254]
00:29:41.233  bw (  KiB/s): min= 2176, max= 2304, per=4.13%, avg=2278.40, stdev=52.53, samples=20
00:29:41.233  iops        : min=  544, max=  576, avg=569.60, stdev=13.13, samples=20
00:29:41.233  lat (msec)   : 20=0.28%, 50=99.72%
00:29:41.233  cpu          : usr=98.69%, sys=0.93%, ctx=12, majf=0, minf=22
00:29:41.233  IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:29:41.233  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.233  complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.233  issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.233  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.233  filename2: (groupid=0, jobs=1): err= 0: pid=485760: Fri Jul 12 19:21:42 2024
00:29:41.234  read: IOPS=573, BW=2292KiB/s (2347kB/s)(22.4MiB/10026msec)
00:29:41.234  slat (nsec): min=6893, max=90701, avg=40378.72, stdev=18463.77
00:29:41.234  clat (usec): min=14798, max=52663, avg=27612.40, stdev=2075.93
00:29:41.234  lat (usec): min=14805, max=52679, avg=27652.77, stdev=2077.30
00:29:41.234  clat percentiles (usec):
00:29:41.234  | 1.00th=[16909], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395],
00:29:41.234  | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919],
00:29:41.234  | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443],
00:29:41.234  | 99.00th=[30016], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827],
00:29:41.234  | 99.99th=[52691]
00:29:41.234  bw (  KiB/s): min= 2176, max= 2448, per=4.16%, avg=2292.00, stdev=59.39, samples=20
00:29:41.234  iops        : min=  544, max=  612, avg=573.00, stdev=14.85, samples=20
00:29:41.234  lat (msec)   : 20=1.78%, 50=98.19%, 100=0.03%
00:29:41.234  cpu          : usr=98.72%, sys=0.89%, ctx=12, majf=0, minf=24
00:29:41.234  IO depths    : 1=5.9%, 2=11.9%, 4=24.3%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0%
00:29:41.234  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.234  complete  : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.234  issued rwts: total=5746,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.234  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.234  filename2: (groupid=0, jobs=1): err= 0: pid=485761: Fri Jul 12 19:21:42 2024
00:29:41.234  read: IOPS=567, BW=2272KiB/s (2326kB/s)(22.2MiB/10001msec)
00:29:41.234  slat (nsec): min=7848, max=92546, avg=42997.44, stdev=18276.17
00:29:41.234  clat (usec): min=25910, max=58981, avg=27813.40, stdev=1719.20
00:29:41.234  lat (usec): min=25940, max=59001, avg=27856.40, stdev=1716.52
00:29:41.234  clat percentiles (usec):
00:29:41.234  | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395],
00:29:41.234  | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657],
00:29:41.234  | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443],
00:29:41.234  | 99.00th=[29492], 99.50th=[30016], 99.90th=[58983], 99.95th=[58983],
00:29:41.234  | 99.99th=[58983]
00:29:41.234  bw (  KiB/s): min= 2048, max= 2304, per=4.12%, avg=2270.32, stdev=71.93, samples=19
00:29:41.234  iops        : min=  512, max=  576, avg=567.58, stdev=17.98, samples=19
00:29:41.234  lat (msec)   : 50=99.72%, 100=0.28%
00:29:41.234  cpu          : usr=98.97%, sys=0.65%, ctx=8, majf=0, minf=24
00:29:41.234  IO depths    : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0%
00:29:41.234  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.234  complete  : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.234  issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.234  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.234  filename2: (groupid=0, jobs=1): err= 0: pid=485762: Fri Jul 12 19:21:42 2024
00:29:41.234  read: IOPS=569, BW=2278KiB/s (2333kB/s)(22.3MiB/10004msec)
00:29:41.234  slat (nsec): min=7037, max=97580, avg=43487.71, stdev=18078.68
00:29:41.234  clat (usec): min=3842, max=58928, avg=27717.80, stdev=2091.57
00:29:41.234  lat (usec): min=3849, max=58950, avg=27761.29, stdev=2090.71
00:29:41.234  clat percentiles (usec):
00:29:41.234  | 1.00th=[26084], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395],
00:29:41.234  | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657],
00:29:41.234  | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443],
00:29:41.234  | 99.00th=[29492], 99.50th=[30278], 99.90th=[58983], 99.95th=[58983],
00:29:41.234  | 99.99th=[58983]
00:29:41.234  bw (  KiB/s): min= 2048, max= 2352, per=4.12%, avg=2272.84, stdev=73.99, samples=19
00:29:41.234  iops        : min=  512, max=  588, avg=568.21, stdev=18.50, samples=19
00:29:41.234  lat (msec)   : 4=0.19%, 20=0.21%, 50=99.32%, 100=0.28%
00:29:41.234  cpu          : usr=98.98%, sys=0.64%, ctx=16, majf=0, minf=17
00:29:41.234  IO depths    : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0%
00:29:41.234  submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.234  complete  : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:41.234  issued rwts: total=5697,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:41.234  latency   : target=0, window=0, percentile=100.00%, depth=16
00:29:41.234  
00:29:41.234  Run status group 0 (all jobs):
00:29:41.234  READ: bw=53.9MiB/s (56.5MB/s), 2268KiB/s-2453KiB/s (2322kB/s-2511kB/s), io=540MiB (566MB), run=10001-10026msec
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:41.234  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:41.235  bdev_null0
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:41.235  [2024-07-12 19:21:42.574346] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:41.235  bdev_null1
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=()
00:29:41.235  19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:29:41.235  19:21:42
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:41.235 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:41.235 19:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:41.235 19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:41.235 19:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:41.235 { 00:29:41.235 "params": { 00:29:41.235 "name": "Nvme$subsystem", 00:29:41.235 "trtype": "$TEST_TRANSPORT", 00:29:41.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.235 "adrfam": "ipv4", 00:29:41.235 "trsvcid": "$NVMF_PORT", 00:29:41.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.235 "hdgst": ${hdgst:-false}, 00:29:41.235 "ddgst": ${ddgst:-false} 00:29:41.235 }, 00:29:41.235 "method": "bdev_nvme_attach_controller" 00:29:41.235 } 00:29:41.235 EOF 00:29:41.235 )") 00:29:41.235 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:41.235 19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:41.235 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:41.235 19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:41.235 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:41.235 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:41.235 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:41.235 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:41.235 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:41.235 19:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:41.235 19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:41.236 { 00:29:41.236 "params": { 00:29:41.236 "name": "Nvme$subsystem", 00:29:41.236 "trtype": "$TEST_TRANSPORT", 00:29:41.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:41.236 "adrfam": "ipv4", 00:29:41.236 "trsvcid": "$NVMF_PORT", 00:29:41.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:41.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:41.236 "hdgst": ${hdgst:-false}, 00:29:41.236 "ddgst": ${ddgst:-false} 
00:29:41.236 }, 00:29:41.236 "method": "bdev_nvme_attach_controller" 00:29:41.236 } 00:29:41.236 EOF 00:29:41.236 )") 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:41.236 "params": { 00:29:41.236 "name": "Nvme0", 00:29:41.236 "trtype": "tcp", 00:29:41.236 "traddr": "10.0.0.2", 00:29:41.236 "adrfam": "ipv4", 00:29:41.236 "trsvcid": "4420", 00:29:41.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:41.236 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:41.236 "hdgst": false, 00:29:41.236 "ddgst": false 00:29:41.236 }, 00:29:41.236 "method": "bdev_nvme_attach_controller" 00:29:41.236 },{ 00:29:41.236 "params": { 00:29:41.236 "name": "Nvme1", 00:29:41.236 "trtype": "tcp", 00:29:41.236 "traddr": "10.0.0.2", 00:29:41.236 "adrfam": "ipv4", 00:29:41.236 "trsvcid": "4420", 00:29:41.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:41.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:41.236 "hdgst": false, 00:29:41.236 "ddgst": false 00:29:41.236 }, 00:29:41.236 "method": "bdev_nvme_attach_controller" 00:29:41.236 }' 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:41.236 19:21:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:41.236 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:41.236 ... 00:29:41.236 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:41.236 ... 
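What the trace above amounts to: target/dif.sh generates the controller-attach parameters as JSON, passes them to fio over an inherited file descriptor (/dev/fd/62), and LD_PRELOADs SPDK's external fio engine so the randread jobs below run against the remote null bdevs rather than local files. A minimal standalone sketch of the same invocation follows; the wrapper layout follows SPDK's JSON config format, while the spdk path, output path, and job file name are placeholders, not values from this run:

printf '%s\n' '{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "adrfam": "ipv4",
        "traddr": "10.0.0.2", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}' > /tmp/bdev.json
# preload the fio plugin built under spdk/build/fio and run the job file
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json jobs.fio

The test script avoids the temp file by feeding the config over a file descriptor instead; the effect is the same.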
00:29:41.236 fio-3.35 00:29:41.236 Starting 4 threads 00:29:41.236 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.505 00:29:46.505 filename0: (groupid=0, jobs=1): err= 0: pid=487709: Fri Jul 12 19:21:48 2024 00:29:46.505 read: IOPS=2687, BW=21.0MiB/s (22.0MB/s)(105MiB/5003msec) 00:29:46.505 slat (nsec): min=6189, max=57861, avg=11734.77, stdev=6120.56 00:29:46.505 clat (usec): min=765, max=43818, avg=2940.32, stdev=1097.05 00:29:46.505 lat (usec): min=777, max=43850, avg=2952.06, stdev=1097.27 00:29:46.505 clat percentiles (usec): 00:29:46.505 | 1.00th=[ 1860], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2573], 00:29:46.505 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 2999], 00:29:46.505 | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3392], 95.00th=[ 3720], 00:29:46.505 | 99.00th=[ 4555], 99.50th=[ 4883], 99.90th=[ 5473], 99.95th=[43779], 00:29:46.505 | 99.99th=[43779] 00:29:46.505 bw ( KiB/s): min=20144, max=22448, per=25.53%, avg=21500.80, stdev=836.56, samples=10 00:29:46.505 iops : min= 2518, max= 2806, avg=2687.60, stdev=104.57, samples=10 00:29:46.505 lat (usec) : 1000=0.07% 00:29:46.505 lat (msec) : 2=1.48%, 4=95.83%, 10=2.56%, 50=0.06% 00:29:46.505 cpu : usr=94.60%, sys=3.64%, ctx=420, majf=0, minf=9 00:29:46.505 IO depths : 1=0.5%, 2=8.7%, 4=62.4%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:46.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.505 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.505 issued rwts: total=13446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.505 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:46.505 filename0: (groupid=0, jobs=1): err= 0: pid=487710: Fri Jul 12 19:21:48 2024 00:29:46.505 read: IOPS=2452, BW=19.2MiB/s (20.1MB/s)(95.9MiB/5005msec) 00:29:46.505 slat (nsec): min=6133, max=55843, avg=10298.21, stdev=4741.06 00:29:46.505 clat (usec): min=801, max=5850, avg=3230.57, stdev=577.03 00:29:46.505 lat (usec): min=808, max=5856, avg=3240.87, stdev=576.32 00:29:46.505 clat percentiles (usec): 00:29:46.505 | 1.00th=[ 2057], 5.00th=[ 2507], 10.00th=[ 2704], 20.00th=[ 2900], 00:29:46.505 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3097], 60.00th=[ 3163], 00:29:46.505 | 70.00th=[ 3294], 80.00th=[ 3556], 90.00th=[ 4015], 95.00th=[ 4424], 00:29:46.505 | 99.00th=[ 5145], 99.50th=[ 5407], 99.90th=[ 5604], 99.95th=[ 5669], 00:29:46.505 | 99.99th=[ 5866] 00:29:46.505 bw ( KiB/s): min=18512, max=20576, per=23.26%, avg=19590.11, stdev=740.67, samples=9 00:29:46.505 iops : min= 2314, max= 2572, avg=2448.67, stdev=92.56, samples=9 00:29:46.505 lat (usec) : 1000=0.02% 00:29:46.505 lat (msec) : 2=0.82%, 4=89.20%, 10=9.95% 00:29:46.505 cpu : usr=97.02%, sys=2.62%, ctx=6, majf=0, minf=9 00:29:46.505 IO depths : 1=0.3%, 2=3.3%, 4=68.5%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:46.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.505 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.505 issued rwts: total=12277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.505 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:46.505 filename1: (groupid=0, jobs=1): err= 0: pid=487711: Fri Jul 12 19:21:48 2024 00:29:46.505 read: IOPS=2577, BW=20.1MiB/s (21.1MB/s)(101MiB/5005msec) 00:29:46.505 slat (nsec): min=6171, max=55831, avg=10712.31, stdev=4735.89 00:29:46.505 clat (usec): min=669, max=5793, avg=3070.40, stdev=530.23 00:29:46.505 lat (usec): min=679, max=5799, avg=3081.11, stdev=529.93 
00:29:46.505 clat percentiles (usec): 00:29:46.505 | 1.00th=[ 1909], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2737], 00:29:46.505 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:29:46.505 | 70.00th=[ 3163], 80.00th=[ 3294], 90.00th=[ 3687], 95.00th=[ 4146], 00:29:46.505 | 99.00th=[ 5014], 99.50th=[ 5211], 99.90th=[ 5407], 99.95th=[ 5407], 00:29:46.505 | 99.99th=[ 5800] 00:29:46.505 bw ( KiB/s): min=19824, max=21760, per=24.61%, avg=20723.56, stdev=707.44, samples=9 00:29:46.505 iops : min= 2478, max= 2720, avg=2590.44, stdev=88.43, samples=9 00:29:46.505 lat (usec) : 750=0.02%, 1000=0.02% 00:29:46.505 lat (msec) : 2=1.18%, 4=92.62%, 10=6.16% 00:29:46.505 cpu : usr=96.94%, sys=2.72%, ctx=13, majf=0, minf=9 00:29:46.505 IO depths : 1=0.3%, 2=8.5%, 4=62.8%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:46.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.505 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.505 issued rwts: total=12899,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.505 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:46.505 filename1: (groupid=0, jobs=1): err= 0: pid=487712: Fri Jul 12 19:21:48 2024 00:29:46.505 read: IOPS=2811, BW=22.0MiB/s (23.0MB/s)(110MiB/5001msec) 00:29:46.505 slat (nsec): min=6129, max=46922, avg=11222.78, stdev=4754.91 00:29:46.505 clat (usec): min=689, max=5562, avg=2808.43, stdev=440.00 00:29:46.505 lat (usec): min=701, max=5574, avg=2819.65, stdev=440.39 00:29:46.505 clat percentiles (usec): 00:29:46.505 | 1.00th=[ 1696], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2474], 00:29:46.505 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2835], 60.00th=[ 2933], 00:29:46.505 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 3458], 00:29:46.505 | 99.00th=[ 4228], 99.50th=[ 4621], 99.90th=[ 5211], 99.95th=[ 5276], 00:29:46.505 | 99.99th=[ 5407] 00:29:46.505 bw ( KiB/s): min=21136, max=23824, per=26.72%, avg=22503.11, stdev=886.24, samples=9 00:29:46.505 iops : min= 2642, max= 2978, avg=2812.89, stdev=110.78, samples=9 00:29:46.505 lat (usec) : 750=0.01%, 1000=0.07% 00:29:46.505 lat (msec) : 2=2.46%, 4=95.93%, 10=1.53% 00:29:46.505 cpu : usr=96.04%, sys=2.90%, ctx=124, majf=0, minf=0 00:29:46.505 IO depths : 1=0.3%, 2=14.4%, 4=57.2%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:46.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.505 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:46.505 issued rwts: total=14058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:46.505 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:46.505 00:29:46.505 Run status group 0 (all jobs): 00:29:46.505 READ: bw=82.2MiB/s (86.2MB/s), 19.2MiB/s-22.0MiB/s (20.1MB/s-23.0MB/s), io=412MiB (432MB), run=5001-5005msec 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.505 00:29:46.505 real 0m24.384s 00:29:46.505 user 4m52.998s 00:29:46.505 sys 0m4.260s 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:46.505 19:21:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:46.505 ************************************ 00:29:46.505 END TEST fio_dif_rand_params 00:29:46.505 ************************************ 00:29:46.505 19:21:48 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:46.505 19:21:48 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:46.505 19:21:48 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:46.505 19:21:48 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:46.505 19:21:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:46.505 ************************************ 00:29:46.505 START TEST fio_dif_digest 00:29:46.505 ************************************ 00:29:46.505 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:29:46.505 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:46.505 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:46.505 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:46.506 
19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:46.506 bdev_null0 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:46.506 [2024-07-12 19:21:49.053651] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:46.506 19:21:49 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:46.506 { 00:29:46.506 "params": { 00:29:46.506 "name": "Nvme$subsystem", 00:29:46.506 "trtype": "$TEST_TRANSPORT", 00:29:46.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:46.506 "adrfam": "ipv4", 00:29:46.506 "trsvcid": "$NVMF_PORT", 00:29:46.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:46.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:46.506 "hdgst": ${hdgst:-false}, 00:29:46.506 "ddgst": ${ddgst:-false} 00:29:46.506 }, 00:29:46.506 "method": "bdev_nvme_attach_controller" 00:29:46.506 } 00:29:46.506 EOF 00:29:46.506 )") 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
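The config+=("$(cat <<-EOF ... EOF)") idiom being traced here accumulates one JSON fragment per subsystem; as the next lines of the trace show, the fragments are then joined with IFS=, and printed, with jq validating the assembled document. Condensed to its essentials (the wrapper object here is a simplification of what nvmf/common.sh actually emits):

config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem" }, "method": "bdev_nvme_attach_controller" }
EOF
)")
done
# join the per-subsystem fragments with commas, wrap, and let jq validate
(IFS=,; printf '{"config":[%s]}\n' "${config[*]}") | jq .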
00:29:46.506 19:21:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:46.794 19:21:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:46.794 "params": { 00:29:46.794 "name": "Nvme0", 00:29:46.794 "trtype": "tcp", 00:29:46.794 "traddr": "10.0.0.2", 00:29:46.794 "adrfam": "ipv4", 00:29:46.794 "trsvcid": "4420", 00:29:46.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:46.794 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:46.794 "hdgst": true, 00:29:46.794 "ddgst": true 00:29:46.794 }, 00:29:46.794 "method": "bdev_nvme_attach_controller" 00:29:46.794 }' 00:29:46.794 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:46.794 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:46.794 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:46.794 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:46.794 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:46.794 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:46.794 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:46.794 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:46.794 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:46.794 19:21:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:47.059 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:47.059 ... 
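Note the difference from the rand_params runs above: this pass attaches the controller with "hdgst": true and "ddgst": true, so every NVMe/TCP PDU in the randread jobs below carries CRC32C header and data digests, and the namespace behind it is a DIF type 3 null bdev. Condensed from the rpc_cmd calls traced earlier, the target-side setup is roughly as follows; the ./scripts/rpc.py invocation path is illustrative, since the test drives the same RPCs through its rpc_cmd helper:

# 64 MB null bdev, 512-byte blocks plus 16 bytes of metadata, DIF type 3
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
  --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
  -t tcp -a 10.0.0.2 -s 4420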
00:29:47.059 fio-3.35 00:29:47.059 Starting 3 threads 00:29:47.059 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.267 00:29:59.267 filename0: (groupid=0, jobs=1): err= 0: pid=488801: Fri Jul 12 19:21:59 2024 00:29:59.267 read: IOPS=295, BW=36.9MiB/s (38.7MB/s)(371MiB/10047msec) 00:29:59.267 slat (nsec): min=6550, max=41661, avg=11599.53, stdev=2025.62 00:29:59.267 clat (usec): min=7872, max=49066, avg=10123.84, stdev=1217.86 00:29:59.267 lat (usec): min=7881, max=49078, avg=10135.44, stdev=1217.81 00:29:59.267 clat percentiles (usec): 00:29:59.267 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:29:59.267 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:29:59.267 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:29:59.267 | 99.00th=[11731], 99.50th=[11863], 99.90th=[12780], 99.95th=[48497], 00:29:59.267 | 99.99th=[49021] 00:29:59.267 bw ( KiB/s): min=36608, max=38912, per=35.71%, avg=37977.60, stdev=577.08, samples=20 00:29:59.267 iops : min= 286, max= 304, avg=296.70, stdev= 4.51, samples=20 00:29:59.267 lat (msec) : 10=43.99%, 20=55.94%, 50=0.07% 00:29:59.267 cpu : usr=95.80%, sys=3.88%, ctx=22, majf=0, minf=155 00:29:59.267 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:59.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.267 issued rwts: total=2969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.267 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:59.267 filename0: (groupid=0, jobs=1): err= 0: pid=488802: Fri Jul 12 19:21:59 2024 00:29:59.267 read: IOPS=271, BW=33.9MiB/s (35.6MB/s)(341MiB/10046msec) 00:29:59.267 slat (usec): min=6, max=277, avg=13.44, stdev= 5.93 00:29:59.267 clat (usec): min=8316, max=50479, avg=11027.97, stdev=1270.92 00:29:59.267 lat (usec): min=8330, max=50491, avg=11041.42, stdev=1270.92 00:29:59.267 clat percentiles (usec): 00:29:59.267 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:29:59.267 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:29:59.267 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:29:59.267 | 99.00th=[12780], 99.50th=[13042], 99.90th=[14353], 99.95th=[46924], 00:29:59.267 | 99.99th=[50594] 00:29:59.267 bw ( KiB/s): min=33280, max=35840, per=32.78%, avg=34854.40, stdev=644.83, samples=20 00:29:59.267 iops : min= 260, max= 280, avg=272.30, stdev= 5.04, samples=20 00:29:59.267 lat (msec) : 10=7.96%, 20=91.96%, 50=0.04%, 100=0.04% 00:29:59.267 cpu : usr=94.01%, sys=4.33%, ctx=601, majf=0, minf=137 00:29:59.267 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:59.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.267 issued rwts: total=2725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.267 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:59.267 filename0: (groupid=0, jobs=1): err= 0: pid=488803: Fri Jul 12 19:21:59 2024 00:29:59.267 read: IOPS=264, BW=33.0MiB/s (34.6MB/s)(332MiB/10045msec) 00:29:59.267 slat (nsec): min=6564, max=21223, avg=11896.70, stdev=1739.09 00:29:59.267 clat (usec): min=8707, max=51826, avg=11328.99, stdev=1279.61 00:29:59.267 lat (usec): min=8719, max=51835, avg=11340.89, stdev=1279.60 00:29:59.267 clat percentiles (usec): 00:29:59.267 | 1.00th=[ 9634], 
5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 00:29:59.267 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:29:59.267 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12256], 95.00th=[12518], 00:29:59.267 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13960], 99.95th=[45876], 00:29:59.267 | 99.99th=[51643] 00:29:59.267 bw ( KiB/s): min=33024, max=35328, per=31.91%, avg=33932.80, stdev=640.54, samples=20 00:29:59.267 iops : min= 258, max= 276, avg=265.10, stdev= 5.00, samples=20 00:29:59.267 lat (msec) : 10=4.03%, 20=95.89%, 50=0.04%, 100=0.04% 00:29:59.267 cpu : usr=96.23%, sys=3.44%, ctx=23, majf=0, minf=105 00:29:59.267 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:59.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:59.267 issued rwts: total=2653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:59.267 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:59.267 00:29:59.267 Run status group 0 (all jobs): 00:29:59.267 READ: bw=104MiB/s (109MB/s), 33.0MiB/s-36.9MiB/s (34.6MB/s-38.7MB/s), io=1043MiB (1094MB), run=10045-10047msec 00:29:59.267 19:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:59.267 19:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:59.267 19:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:59.267 19:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:59.268 19:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:59.268 19:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:59.268 19:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.268 19:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:59.268 19:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.268 19:22:00 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:59.268 19:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:59.268 19:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:59.268 19:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:59.268 00:29:59.268 real 0m11.200s 00:29:59.268 user 0m35.690s 00:29:59.268 sys 0m1.496s 00:29:59.268 19:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:59.268 19:22:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:59.268 ************************************ 00:29:59.268 END TEST fio_dif_digest 00:29:59.268 ************************************ 00:29:59.268 19:22:00 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:59.268 19:22:00 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:59.268 19:22:00 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:59.268 19:22:00 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:59.268 19:22:00 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:59.268 19:22:00 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:59.268 19:22:00 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:59.268 19:22:00 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:59.268 19:22:00 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:29:59.268 rmmod nvme_tcp 00:29:59.268 rmmod nvme_fabrics 00:29:59.268 rmmod nvme_keyring 00:29:59.268 19:22:00 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:59.268 19:22:00 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:59.268 19:22:00 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:59.268 19:22:00 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 479716 ']' 00:29:59.268 19:22:00 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 479716 00:29:59.268 19:22:00 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 479716 ']' 00:29:59.268 19:22:00 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 479716 00:29:59.268 19:22:00 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:29:59.268 19:22:00 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:59.268 19:22:00 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 479716 00:29:59.268 19:22:00 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:59.268 19:22:00 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:59.268 19:22:00 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 479716' 00:29:59.268 killing process with pid 479716 00:29:59.268 19:22:00 nvmf_dif -- common/autotest_common.sh@967 -- # kill 479716 00:29:59.268 19:22:00 nvmf_dif -- common/autotest_common.sh@972 -- # wait 479716 00:29:59.268 19:22:00 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:59.268 19:22:00 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:00.645 Waiting for block devices as requested 00:30:00.645 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:00.903 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:00.903 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:00.903 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:01.162 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:01.162 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:01.162 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:01.422 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:01.422 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:01.422 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:01.682 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:01.682 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:01.682 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:01.682 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:01.941 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:01.941 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:01.941 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:02.200 19:22:04 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:02.200 19:22:04 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:02.200 19:22:04 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:02.200 19:22:04 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:02.200 19:22:04 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.200 19:22:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:02.200 19:22:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.104 19:22:06 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:04.104 00:30:04.104 real 1m14.347s 00:30:04.104 user 7m11.777s 00:30:04.104 sys 0m18.452s 00:30:04.104 19:22:06 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:04.104 
19:22:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:04.104 ************************************ 00:30:04.104 END TEST nvmf_dif 00:30:04.104 ************************************ 00:30:04.104 19:22:06 -- common/autotest_common.sh@1142 -- # return 0 00:30:04.104 19:22:06 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:04.104 19:22:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:04.104 19:22:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:04.104 19:22:06 -- common/autotest_common.sh@10 -- # set +x 00:30:04.363 ************************************ 00:30:04.363 START TEST nvmf_abort_qd_sizes 00:30:04.363 ************************************ 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:04.363 * Looking for test storage... 00:30:04.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.363 19:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.364 19:22:06 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:30:04.364 19:22:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.641 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:09.900 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:09.900 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:09.900 Found net devices under 0000:86:00.0: cvl_0_0 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.900 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:09.901 Found net devices under 0000:86:00.1: cvl_0_1 00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
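The sequence above is nvmf/common.sh enumerating candidate NICs: the PCI bus is filtered against known Intel/Mellanox device IDs, both E810 ports (0x159b) at 0000:86:00.x match, and each function's kernel netdev name is then resolved through sysfs. The sysfs walk reduces to a sketch like this, with this run's bus addresses filled in and the loop body simplified:

for pci in 0000:86:00.0 0000:86:00.1; do
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$path" ] || continue    # function has no bound netdev
        echo "Found net devices under $pci: ${path##*/}"
    done
done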
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:30:09.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:09.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms
00:30:09.901
00:30:09.901 --- 10.0.0.2 ping statistics ---
00:30:09.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:09.901 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:09.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:09.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms
00:30:09.901
00:30:09.901 --- 10.0.0.1 ping statistics ---
00:30:09.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:09.901 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']'
00:30:09.901 19:22:12 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:30:13.188 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:30:13.188 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:30:13.188 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:30:13.188 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:30:13.188 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:30:13.188 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:30:13.188 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:30:13.188 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:30:13.188 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:30:13.188 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:30:13.188 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:30:13.188 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:30:13.188 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:30:13.188 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:30:13.188 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:30:13.188 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:30:13.758 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=496775
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 496775
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 496775 ']'
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100
00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:13.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:13.758 19:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:14.018 [2024-07-12 19:22:16.326118] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:30:14.018 [2024-07-12 19:22:16.326167] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.018 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.018 [2024-07-12 19:22:16.394597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:14.018 [2024-07-12 19:22:16.496303] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.019 [2024-07-12 19:22:16.496349] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.019 [2024-07-12 19:22:16.496359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.019 [2024-07-12 19:22:16.496368] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.019 [2024-07-12 19:22:16.496392] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:14.019 [2024-07-12 19:22:16.496455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.019 [2024-07-12 19:22:16.496572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.019 [2024-07-12 19:22:16.496678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.019 [2024-07-12 19:22:16.496678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:14.588 19:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:14.588 19:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:30:14.588 19:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:14.588 19:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:14.588 19:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:14.847 19:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:14.847 19:22:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:14.847 19:22:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:14.847 19:22:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:14.847 19:22:17 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:14.847 19:22:17 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:14.847 19:22:17 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:30:14.847 19:22:17 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:30:14.847 19:22:17 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:14.847 19:22:17 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:30:14.847 19:22:17 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:14.847 19:22:17 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:14.847 19:22:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:14.847 19:22:17 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:30:14.847 19:22:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:30:14.848 19:22:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:30:14.848 19:22:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:30:14.848 19:22:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:14.848 19:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:14.848 19:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:14.848 19:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:14.848 ************************************ 00:30:14.848 START TEST spdk_target_abort 00:30:14.848 ************************************ 00:30:14.848 19:22:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:30:14.848 19:22:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:14.848 19:22:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:30:14.848 19:22:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:14.848 19:22:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:18.140 spdk_targetn1 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:18.140 [2024-07-12 19:22:20.061107] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:18.140 [2024-07-12 19:22:20.094453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:18.140 19:22:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:18.140 EAL: No free 2048 kB hugepages 
reported on node 1 00:30:21.486 Initializing NVMe Controllers 00:30:21.486 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:21.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:21.486 Initialization complete. Launching workers. 00:30:21.486 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15438, failed: 0 00:30:21.486 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1383, failed to submit 14055 00:30:21.486 success 718, unsuccess 665, failed 0 00:30:21.486 19:22:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:21.486 19:22:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:21.486 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.168 Initializing NVMe Controllers 00:30:24.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:24.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:24.168 Initialization complete. Launching workers. 00:30:24.168 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8465, failed: 0 00:30:24.168 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1255, failed to submit 7210 00:30:24.168 success 315, unsuccess 940, failed 0 00:30:24.168 19:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:24.168 19:22:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:24.168 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.587 Initializing NVMe Controllers 00:30:27.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:27.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:27.587 Initialization complete. Launching workers. 
00:30:27.587 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38353, failed: 0 00:30:27.587 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2790, failed to submit 35563 00:30:27.587 success 593, unsuccess 2197, failed 0 00:30:27.587 19:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:27.587 19:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.587 19:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:27.587 19:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.587 19:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:27.587 19:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.587 19:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:28.587 19:22:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.587 19:22:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 496775 00:30:28.587 19:22:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 496775 ']' 00:30:28.587 19:22:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 496775 00:30:28.587 19:22:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:30:28.587 19:22:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:28.587 19:22:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 496775 00:30:28.587 19:22:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:28.587 19:22:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:28.587 19:22:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 496775' 00:30:28.587 killing process with pid 496775 00:30:28.587 19:22:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 496775 00:30:28.587 19:22:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 496775 00:30:28.847 00:30:28.847 real 0m14.022s 00:30:28.847 user 0m55.903s 00:30:28.847 sys 0m2.200s 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:28.847 ************************************ 00:30:28.847 END TEST spdk_target_abort 00:30:28.847 ************************************ 00:30:28.847 19:22:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:28.847 19:22:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:28.847 19:22:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:28.847 19:22:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:28.847 19:22:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:28.847 
************************************ 00:30:28.847 START TEST kernel_target_abort 00:30:28.847 ************************************ 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:28.847 19:22:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:31.409 Waiting for block devices as requested 00:30:31.668 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:31.668 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:31.668 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:31.926 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:31.926 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:31.926 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:32.185 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:32.185 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:32.185 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:32.185 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:32.444 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:32.444 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:32.444 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:32.703 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:32.703 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:32.703 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:32.703 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:32.961 No valid GPT data, bailing 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:32.961 19:22:35 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:30:32.961
00:30:32.961 Discovery Log Number of Records 2, Generation counter 2
00:30:32.961 =====Discovery Log Entry 0======
00:30:32.961 trtype: tcp
00:30:32.961 adrfam: ipv4
00:30:32.961 subtype: current discovery subsystem
00:30:32.961 treq: not specified, sq flow control disable supported
00:30:32.961 portid: 1
00:30:32.961 trsvcid: 4420
00:30:32.961 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:30:32.961 traddr: 10.0.0.1
00:30:32.961 eflags: none
00:30:32.961 sectype: none
00:30:32.961 =====Discovery Log Entry 1======
00:30:32.961 trtype: tcp
00:30:32.961 adrfam: ipv4
00:30:32.961 subtype: nvme subsystem
00:30:32.961 treq: not specified, sq flow control disable supported
00:30:32.961 portid: 1
00:30:32.961 trsvcid: 4420
00:30:32.961 subnqn: nqn.2016-06.io.spdk:testnqn
00:30:32.961 traddr: 10.0.0.1
00:30:32.961 eflags: none
00:30:32.961 sectype: none
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:30:32.961 19:22:35
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:32.961 19:22:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:33.219 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.509 Initializing NVMe Controllers 00:30:36.509 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:36.509 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:36.509 Initialization complete. Launching workers. 00:30:36.509 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93493, failed: 0 00:30:36.509 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 93493, failed to submit 0 00:30:36.509 success 0, unsuccess 93493, failed 0 00:30:36.509 19:22:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:36.509 19:22:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:36.509 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.796 Initializing NVMe Controllers 00:30:39.796 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:39.796 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:39.796 Initialization complete. Launching workers. 
00:30:39.796 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 144832, failed: 0 00:30:39.796 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35862, failed to submit 108970 00:30:39.796 success 0, unsuccess 35862, failed 0 00:30:39.796 19:22:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:39.796 19:22:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:39.796 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.328 Initializing NVMe Controllers 00:30:42.328 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:42.328 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:42.328 Initialization complete. Launching workers. 00:30:42.328 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 139986, failed: 0 00:30:42.328 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35038, failed to submit 104948 00:30:42.328 success 0, unsuccess 35038, failed 0 00:30:42.328 19:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:42.328 19:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:42.328 19:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:42.328 19:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:42.328 19:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:42.328 19:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:42.328 19:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:42.328 19:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:42.328 19:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:42.328 19:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:45.619 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:45.619 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:45.619 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:45.619 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:45.619 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:45.619 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:45.619 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:45.619 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:45.619 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:45.619 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:45.619 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:45.619 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:45.619 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:45.619 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:30:45.619 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:45.619 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:46.188 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:46.188 00:30:46.188 real 0m17.305s 00:30:46.188 user 0m8.997s 00:30:46.188 sys 0m4.932s 00:30:46.188 19:22:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:46.188 19:22:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:46.188 ************************************ 00:30:46.188 END TEST kernel_target_abort 00:30:46.188 ************************************ 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:46.188 rmmod nvme_tcp 00:30:46.188 rmmod nvme_fabrics 00:30:46.188 rmmod nvme_keyring 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 496775 ']' 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 496775 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 496775 ']' 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 496775 00:30:46.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (496775) - No such process 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 496775 is not found' 00:30:46.188 Process with pid 496775 is not found 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:46.188 19:22:48 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:49.476 Waiting for block devices as requested 00:30:49.476 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:49.476 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:49.476 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:49.476 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:49.476 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:49.476 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:49.476 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:49.476 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:49.476 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:49.735 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:49.735 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:49.735 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:49.993 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:49.993 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:30:49.993 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:50.252 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:50.252 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:50.252 19:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:50.252 19:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:50.252 19:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:50.252 19:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:50.252 19:22:52 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.252 19:22:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:50.252 19:22:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.784 19:22:54 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:52.784 00:30:52.784 real 0m48.100s 00:30:52.784 user 1m9.223s 00:30:52.784 sys 0m15.531s 00:30:52.784 19:22:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:52.784 19:22:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:52.784 ************************************ 00:30:52.784 END TEST nvmf_abort_qd_sizes 00:30:52.784 ************************************ 00:30:52.784 19:22:54 -- common/autotest_common.sh@1142 -- # return 0 00:30:52.784 19:22:54 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:52.784 19:22:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:52.785 19:22:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:52.785 19:22:54 -- common/autotest_common.sh@10 -- # set +x 00:30:52.785 ************************************ 00:30:52.785 START TEST keyring_file 00:30:52.785 ************************************ 00:30:52.785 19:22:54 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:52.785 * Looking for test storage... 
00:30:52.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:52.785 19:22:54 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:52.785 19:22:54 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.785 19:22:54 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.785 19:22:54 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.785 19:22:54 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.785 19:22:54 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.785 19:22:54 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.785 19:22:54 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.785 19:22:54 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:52.785 19:22:54 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:52.785 19:22:54 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:52.785 19:22:54 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:52.785 19:22:54 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:52.785 19:22:54 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:52.785 19:22:54 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:52.785 19:22:54 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:52.785 19:22:54 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:52.785 19:22:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:52.785 19:22:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:52.785 19:22:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:52.785 19:22:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:52.785 19:22:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:52.785 19:22:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ejc2zBJLig 00:30:52.785 19:22:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:52.785 19:22:54 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:52.785 19:22:55 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ejc2zBJLig 00:30:52.785 19:22:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ejc2zBJLig 00:30:52.785 19:22:55 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ejc2zBJLig 00:30:52.785 19:22:55 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:52.785 19:22:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:52.785 19:22:55 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:52.785 19:22:55 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:52.785 19:22:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:52.785 19:22:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:52.785 19:22:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BF6BEqn4ts 00:30:52.785 19:22:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:52.785 19:22:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:52.785 19:22:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:52.785 19:22:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:52.785 19:22:55 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:52.785 19:22:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:52.785 19:22:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:52.785 19:22:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BF6BEqn4ts 00:30:52.785 19:22:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BF6BEqn4ts 00:30:52.785 19:22:55 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.BF6BEqn4ts 00:30:52.785 19:22:55 keyring_file -- keyring/file.sh@30 -- # tgtpid=505565 00:30:52.785 19:22:55 keyring_file -- keyring/file.sh@32 -- # waitforlisten 505565 00:30:52.785 19:22:55 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:52.785 19:22:55 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 505565 ']' 00:30:52.785 19:22:55 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.785 19:22:55 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:52.785 19:22:55 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.785 19:22:55 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:52.785 19:22:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:52.785 [2024-07-12 19:22:55.141347] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:30:52.785 [2024-07-12 19:22:55.141395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid505565 ]
00:30:52.785 EAL: No free 2048 kB hugepages reported on node 1
00:30:52.785 [2024-07-12 19:22:55.206918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:52.785 [2024-07-12 19:22:55.287163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:30:53.719 19:22:55 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:53.719 19:22:55 keyring_file -- common/autotest_common.sh@862 -- # return 0
00:30:53.719 19:22:55 keyring_file -- keyring/file.sh@33 -- # rpc_cmd
00:30:53.719 19:22:55 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:53.719 19:22:55 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:30:53.719 [2024-07-12 19:22:55.942879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:53.719 null0
00:30:53.719 [2024-07-12 19:22:55.974926] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:30:53.719 [2024-07-12 19:22:55.975178] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:30:53.719 [2024-07-12 19:22:55.982940] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:30:53.719 19:22:55 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:53.719 19:22:55 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:30:53.719 19:22:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0
00:30:53.719 19:22:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:30:53.719 19:22:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:30:53.719 19:22:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:30:53.719 19:22:55 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:30:53.719 19:22:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:30:53.719 19:22:55 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:30:53.719 19:22:55 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:53.719 19:22:55 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:30:53.719 [2024-07-12 19:22:55.994972] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:30:53.719 request:
00:30:53.719 {
00:30:53.719 "nqn": "nqn.2016-06.io.spdk:cnode0",
00:30:53.719 "secure_channel": false,
00:30:53.719 "listen_address": {
00:30:53.719 "trtype": "tcp",
00:30:53.719 "traddr": "127.0.0.1",
00:30:53.719 "trsvcid": "4420"
00:30:53.719 },
00:30:53.719 "method": "nvmf_subsystem_add_listener",
00:30:53.719 "req_id": 1
00:30:53.719 }
00:30:53.719 Got JSON-RPC error response
00:30:53.719 response:
00:30:53.719 {
00:30:53.719 "code": -32602,
00:30:53.719 "message": "Invalid parameters"
00:30:53.719 }
00:30:53.719 19:22:56 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:30:53.719 19:22:56 keyring_file -- common/autotest_common.sh@651 -- # es=1
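The two /tmp/tmp.* files registered with keyring_file_add_key below were produced earlier by format_interchange_psk, whose inline python wraps the raw key in the NVMe/TCP TLS PSK interchange format. A rough sketch of that encoding, assuming the key's ASCII bytes are wrapped as-is with a little-endian CRC-32 appended (check nvmf/common.sh's format_key for the authoritative version):

key=00112233445566778899aabbccddeeff digest=0   # digest 0 selects the no-hash variant these tests use
python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")        # CRC-32 over the key bytes, appended
b64 = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, b64))
PY
# -> NVMeTLSkey-1:00:<base64 of key+crc>: which is what lands in e.g. /tmp/tmp.ejc2zBJLig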
00:30:53.719 19:22:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:53.719 19:22:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:53.719 19:22:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:53.719 19:22:56 keyring_file -- keyring/file.sh@46 -- # bperfpid=505659 00:30:53.719 19:22:56 keyring_file -- keyring/file.sh@48 -- # waitforlisten 505659 /var/tmp/bperf.sock 00:30:53.719 19:22:56 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:53.719 19:22:56 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 505659 ']' 00:30:53.720 19:22:56 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:53.720 19:22:56 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:53.720 19:22:56 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:53.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:53.720 19:22:56 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:53.720 19:22:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:53.720 [2024-07-12 19:22:56.047753] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 00:30:53.720 [2024-07-12 19:22:56.047795] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid505659 ] 00:30:53.720 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.720 [2024-07-12 19:22:56.115673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.720 [2024-07-12 19:22:56.195236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.653 19:22:56 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:54.653 19:22:56 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:54.653 19:22:56 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ejc2zBJLig 00:30:54.653 19:22:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ejc2zBJLig 00:30:54.653 19:22:57 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BF6BEqn4ts 00:30:54.653 19:22:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BF6BEqn4ts 00:30:54.911 19:22:57 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:54.911 19:22:57 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:54.911 19:22:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:54.911 19:22:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:54.911 19:22:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:54.911 19:22:57 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ejc2zBJLig == \/\t\m\p\/\t\m\p\.\e\j\c\2\z\B\J\L\i\g ]] 00:30:54.911 19:22:57 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:30:54.911 19:22:57 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:54.911 19:22:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:54.911 19:22:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:54.911 19:22:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:55.169 19:22:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.BF6BEqn4ts == \/\t\m\p\/\t\m\p\.\B\F\6\B\E\q\n\4\t\s ]] 00:30:55.169 19:22:57 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:55.169 19:22:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:55.169 19:22:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:55.169 19:22:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:55.169 19:22:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:55.169 19:22:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:55.427 19:22:57 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:55.427 19:22:57 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:55.427 19:22:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:55.427 19:22:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:55.427 19:22:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:55.427 19:22:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:55.427 19:22:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:55.427 19:22:57 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:55.427 19:22:57 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:55.427 19:22:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:55.685 [2024-07-12 19:22:58.132953] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:55.685 nvme0n1 00:30:55.685 19:22:58 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:55.685 19:22:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:55.685 19:22:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:55.685 19:22:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:55.685 19:22:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:55.685 19:22:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:55.943 19:22:58 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:55.943 19:22:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:55.943 19:22:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:55.943 19:22:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:55.943 19:22:58 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:30:55.943 19:22:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:55.943 19:22:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:56.201 19:22:58 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:56.201 19:22:58 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:56.201 Running I/O for 1 seconds... 00:30:57.135 00:30:57.135 Latency(us) 00:30:57.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.135 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:57.135 nvme0n1 : 1.00 18469.15 72.15 0.00 0.00 6916.01 3462.01 18236.10 00:30:57.135 =================================================================================================================== 00:30:57.135 Total : 18469.15 72.15 0.00 0.00 6916.01 3462.01 18236.10 00:30:57.135 0 00:30:57.135 19:22:59 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:57.135 19:22:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:57.393 19:22:59 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:57.393 19:22:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:57.393 19:22:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:57.393 19:22:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.393 19:22:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:57.393 19:22:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.651 19:23:00 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:57.651 19:23:00 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:57.651 19:23:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:57.651 19:23:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:57.651 19:23:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:57.651 19:23:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.651 19:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.909 19:23:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:57.909 19:23:00 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:57.909 19:23:00 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:57.909 19:23:00 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:57.909 19:23:00 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:57.909 19:23:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:57.909 19:23:00 keyring_file -- common/autotest_common.sh@640 -- # type -t 
bperf_cmd 00:30:57.909 19:23:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:57.909 19:23:00 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:57.909 19:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:57.909 [2024-07-12 19:23:00.408054] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:57.909 [2024-07-12 19:23:00.408944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c5770 (107): Transport endpoint is not connected 00:30:57.909 [2024-07-12 19:23:00.409938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c5770 (9): Bad file descriptor 00:30:57.909 [2024-07-12 19:23:00.410940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:57.909 [2024-07-12 19:23:00.410949] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:57.909 [2024-07-12 19:23:00.410955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:57.909 request: 00:30:57.909 { 00:30:57.909 "name": "nvme0", 00:30:57.909 "trtype": "tcp", 00:30:57.909 "traddr": "127.0.0.1", 00:30:57.909 "adrfam": "ipv4", 00:30:57.909 "trsvcid": "4420", 00:30:57.909 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:57.909 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:57.909 "prchk_reftag": false, 00:30:57.909 "prchk_guard": false, 00:30:57.909 "hdgst": false, 00:30:57.909 "ddgst": false, 00:30:57.909 "psk": "key1", 00:30:57.909 "method": "bdev_nvme_attach_controller", 00:30:57.909 "req_id": 1 00:30:57.909 } 00:30:57.909 Got JSON-RPC error response 00:30:57.909 response: 00:30:57.909 { 00:30:57.909 "code": -5, 00:30:57.909 "message": "Input/output error" 00:30:57.909 } 00:30:57.909 19:23:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:57.909 19:23:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:57.909 19:23:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:57.909 19:23:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:57.909 19:23:00 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:57.909 19:23:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:57.909 19:23:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:57.909 19:23:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.909 19:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.909 19:23:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:58.167 19:23:00 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:58.167 19:23:00 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:58.167 19:23:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:58.167 19:23:00 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:30:58.167 19:23:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:58.167 19:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.167 19:23:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:58.425 19:23:00 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:58.425 19:23:00 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:58.425 19:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:58.425 19:23:00 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:58.425 19:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:58.682 19:23:01 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:58.682 19:23:01 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:58.682 19:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:58.940 19:23:01 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:58.940 19:23:01 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ejc2zBJLig 00:30:58.940 19:23:01 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ejc2zBJLig 00:30:58.940 19:23:01 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:58.940 19:23:01 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ejc2zBJLig 00:30:58.940 19:23:01 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:58.940 19:23:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:58.940 19:23:01 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:58.940 19:23:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:58.940 19:23:01 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ejc2zBJLig 00:30:58.940 19:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ejc2zBJLig 00:30:58.940 [2024-07-12 19:23:01.483108] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ejc2zBJLig': 0100660 00:30:58.940 [2024-07-12 19:23:01.483132] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:58.940 request: 00:30:58.940 { 00:30:58.940 "name": "key0", 00:30:58.940 "path": "/tmp/tmp.ejc2zBJLig", 00:30:58.940 "method": "keyring_file_add_key", 00:30:58.940 "req_id": 1 00:30:58.940 } 00:30:58.940 Got JSON-RPC error response 00:30:58.940 response: 00:30:58.940 { 00:30:58.940 "code": -1, 00:30:58.940 "message": "Operation not permitted" 00:30:58.940 } 00:30:59.198 19:23:01 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:59.198 19:23:01 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:59.198 19:23:01 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:59.198 19:23:01 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
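The keyring_get_keys | jq pipelines repeated throughout this run reduce to two small helpers. A minimal re-creation follows, assuming keyring_get_keys returns a JSON array of objects with name, path, refcnt and removed fields (which is what the jq filters above select on), with paths relative to the spdk checkout:

bperf_cmd()  { ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }

(( $(get_refcnt key0) == 2 ))   # 2 while an attached controller also references key0, 1 otherwise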
00:30:59.198 19:23:01 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ejc2zBJLig 00:30:59.198 19:23:01 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ejc2zBJLig 00:30:59.198 19:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ejc2zBJLig 00:30:59.198 19:23:01 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ejc2zBJLig 00:30:59.198 19:23:01 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:59.198 19:23:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:59.198 19:23:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:59.198 19:23:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:59.198 19:23:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:59.198 19:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:59.456 19:23:01 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:59.456 19:23:01 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.456 19:23:01 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:59.456 19:23:01 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.456 19:23:01 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:59.456 19:23:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.456 19:23:01 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:59.456 19:23:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:59.456 19:23:01 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.456 19:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.714 [2024-07-12 19:23:02.060630] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ejc2zBJLig': No such file or directory 00:30:59.714 [2024-07-12 19:23:02.060655] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:59.714 [2024-07-12 19:23:02.060675] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:59.714 [2024-07-12 19:23:02.060681] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:59.714 [2024-07-12 19:23:02.060686] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:59.714 request: 00:30:59.714 { 00:30:59.714 "name": "nvme0", 00:30:59.714 "trtype": "tcp", 00:30:59.714 "traddr": "127.0.0.1", 00:30:59.714 "adrfam": "ipv4", 00:30:59.714 "trsvcid": "4420", 00:30:59.714 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:30:59.714 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:59.714 "prchk_reftag": false, 00:30:59.714 "prchk_guard": false, 00:30:59.714 "hdgst": false, 00:30:59.714 "ddgst": false, 00:30:59.714 "psk": "key0", 00:30:59.714 "method": "bdev_nvme_attach_controller", 00:30:59.714 "req_id": 1 00:30:59.714 } 00:30:59.714 Got JSON-RPC error response 00:30:59.714 response: 00:30:59.714 { 00:30:59.714 "code": -19, 00:30:59.714 "message": "No such device" 00:30:59.714 } 00:30:59.714 19:23:02 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:59.714 19:23:02 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:59.714 19:23:02 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:59.714 19:23:02 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:59.714 19:23:02 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:59.714 19:23:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:59.714 19:23:02 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:59.714 19:23:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:59.714 19:23:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:59.714 19:23:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:59.714 19:23:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:59.714 19:23:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:59.714 19:23:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ImRnAvyt1H 00:30:59.714 19:23:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:59.714 19:23:02 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:59.714 19:23:02 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:59.714 19:23:02 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:59.714 19:23:02 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:59.714 19:23:02 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:59.714 19:23:02 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:59.972 19:23:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ImRnAvyt1H 00:30:59.972 19:23:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ImRnAvyt1H 00:30:59.972 19:23:02 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.ImRnAvyt1H 00:30:59.972 19:23:02 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ImRnAvyt1H 00:30:59.972 19:23:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ImRnAvyt1H 00:30:59.972 19:23:02 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.972 19:23:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:00.231 nvme0n1 00:31:00.231 19:23:02 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:31:00.231 19:23:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:00.231 19:23:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:00.231 19:23:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:00.231 19:23:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:00.231 19:23:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.491 19:23:02 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:00.491 19:23:02 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:00.491 19:23:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:00.749 19:23:03 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:00.749 19:23:03 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:00.749 19:23:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:00.749 19:23:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:00.749 19:23:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.749 19:23:03 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:00.749 19:23:03 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:00.749 19:23:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:00.749 19:23:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:00.749 19:23:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:00.749 19:23:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.749 19:23:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:01.007 19:23:03 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:01.007 19:23:03 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:01.007 19:23:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:01.264 19:23:03 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:01.264 19:23:03 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:01.264 19:23:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:01.264 19:23:03 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:01.264 19:23:03 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ImRnAvyt1H 00:31:01.264 19:23:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ImRnAvyt1H 00:31:01.525 19:23:03 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BF6BEqn4ts 00:31:01.525 19:23:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BF6BEqn4ts 00:31:01.784 19:23:04 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:01.784 19:23:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:02.042 nvme0n1 00:31:02.042 19:23:04 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:02.042 19:23:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:02.301 19:23:04 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:02.301 "subsystems": [ 00:31:02.301 { 00:31:02.301 "subsystem": "keyring", 00:31:02.301 "config": [ 00:31:02.301 { 00:31:02.301 "method": "keyring_file_add_key", 00:31:02.301 "params": { 00:31:02.301 "name": "key0", 00:31:02.301 "path": "/tmp/tmp.ImRnAvyt1H" 00:31:02.301 } 00:31:02.301 }, 00:31:02.301 { 00:31:02.301 "method": "keyring_file_add_key", 00:31:02.301 "params": { 00:31:02.301 "name": "key1", 00:31:02.301 "path": "/tmp/tmp.BF6BEqn4ts" 00:31:02.301 } 00:31:02.301 } 00:31:02.301 ] 00:31:02.301 }, 00:31:02.301 { 00:31:02.301 "subsystem": "iobuf", 00:31:02.301 "config": [ 00:31:02.301 { 00:31:02.301 "method": "iobuf_set_options", 00:31:02.301 "params": { 00:31:02.301 "small_pool_count": 8192, 00:31:02.301 "large_pool_count": 1024, 00:31:02.301 "small_bufsize": 8192, 00:31:02.301 "large_bufsize": 135168 00:31:02.301 } 00:31:02.301 } 00:31:02.301 ] 00:31:02.301 }, 00:31:02.301 { 00:31:02.301 "subsystem": "sock", 00:31:02.301 "config": [ 00:31:02.301 { 00:31:02.301 "method": "sock_set_default_impl", 00:31:02.302 "params": { 00:31:02.302 "impl_name": "posix" 00:31:02.302 } 00:31:02.302 }, 00:31:02.302 { 00:31:02.302 "method": "sock_impl_set_options", 00:31:02.302 "params": { 00:31:02.302 "impl_name": "ssl", 00:31:02.302 "recv_buf_size": 4096, 00:31:02.302 "send_buf_size": 4096, 00:31:02.302 "enable_recv_pipe": true, 00:31:02.302 "enable_quickack": false, 00:31:02.302 "enable_placement_id": 0, 00:31:02.302 "enable_zerocopy_send_server": true, 00:31:02.302 "enable_zerocopy_send_client": false, 00:31:02.302 "zerocopy_threshold": 0, 00:31:02.302 "tls_version": 0, 00:31:02.302 "enable_ktls": false 00:31:02.302 } 00:31:02.302 }, 00:31:02.302 { 00:31:02.302 "method": "sock_impl_set_options", 00:31:02.302 "params": { 00:31:02.302 "impl_name": "posix", 00:31:02.302 "recv_buf_size": 2097152, 00:31:02.302 "send_buf_size": 2097152, 00:31:02.302 "enable_recv_pipe": true, 00:31:02.302 "enable_quickack": false, 00:31:02.302 "enable_placement_id": 0, 00:31:02.302 "enable_zerocopy_send_server": true, 00:31:02.302 "enable_zerocopy_send_client": false, 00:31:02.302 "zerocopy_threshold": 0, 00:31:02.302 "tls_version": 0, 00:31:02.302 "enable_ktls": false 00:31:02.302 } 00:31:02.302 } 00:31:02.302 ] 00:31:02.302 }, 00:31:02.302 { 00:31:02.302 "subsystem": "vmd", 00:31:02.302 "config": [] 00:31:02.302 }, 00:31:02.302 { 00:31:02.302 "subsystem": "accel", 00:31:02.302 "config": [ 00:31:02.302 { 00:31:02.302 "method": "accel_set_options", 00:31:02.302 "params": { 00:31:02.302 "small_cache_size": 128, 00:31:02.302 "large_cache_size": 16, 00:31:02.302 "task_count": 2048, 00:31:02.302 "sequence_count": 2048, 00:31:02.302 "buf_count": 2048 00:31:02.302 } 00:31:02.302 } 00:31:02.302 ] 00:31:02.302 }, 00:31:02.302 { 00:31:02.302 
"subsystem": "bdev", 00:31:02.302 "config": [ 00:31:02.302 { 00:31:02.302 "method": "bdev_set_options", 00:31:02.302 "params": { 00:31:02.302 "bdev_io_pool_size": 65535, 00:31:02.302 "bdev_io_cache_size": 256, 00:31:02.302 "bdev_auto_examine": true, 00:31:02.302 "iobuf_small_cache_size": 128, 00:31:02.302 "iobuf_large_cache_size": 16 00:31:02.302 } 00:31:02.302 }, 00:31:02.302 { 00:31:02.302 "method": "bdev_raid_set_options", 00:31:02.302 "params": { 00:31:02.302 "process_window_size_kb": 1024 00:31:02.302 } 00:31:02.302 }, 00:31:02.302 { 00:31:02.302 "method": "bdev_iscsi_set_options", 00:31:02.302 "params": { 00:31:02.302 "timeout_sec": 30 00:31:02.302 } 00:31:02.302 }, 00:31:02.302 { 00:31:02.302 "method": "bdev_nvme_set_options", 00:31:02.302 "params": { 00:31:02.302 "action_on_timeout": "none", 00:31:02.302 "timeout_us": 0, 00:31:02.302 "timeout_admin_us": 0, 00:31:02.302 "keep_alive_timeout_ms": 10000, 00:31:02.302 "arbitration_burst": 0, 00:31:02.302 "low_priority_weight": 0, 00:31:02.302 "medium_priority_weight": 0, 00:31:02.302 "high_priority_weight": 0, 00:31:02.302 "nvme_adminq_poll_period_us": 10000, 00:31:02.302 "nvme_ioq_poll_period_us": 0, 00:31:02.302 "io_queue_requests": 512, 00:31:02.302 "delay_cmd_submit": true, 00:31:02.302 "transport_retry_count": 4, 00:31:02.302 "bdev_retry_count": 3, 00:31:02.302 "transport_ack_timeout": 0, 00:31:02.302 "ctrlr_loss_timeout_sec": 0, 00:31:02.302 "reconnect_delay_sec": 0, 00:31:02.302 "fast_io_fail_timeout_sec": 0, 00:31:02.302 "disable_auto_failback": false, 00:31:02.302 "generate_uuids": false, 00:31:02.302 "transport_tos": 0, 00:31:02.302 "nvme_error_stat": false, 00:31:02.302 "rdma_srq_size": 0, 00:31:02.302 "io_path_stat": false, 00:31:02.302 "allow_accel_sequence": false, 00:31:02.302 "rdma_max_cq_size": 0, 00:31:02.302 "rdma_cm_event_timeout_ms": 0, 00:31:02.302 "dhchap_digests": [ 00:31:02.302 "sha256", 00:31:02.302 "sha384", 00:31:02.302 "sha512" 00:31:02.302 ], 00:31:02.302 "dhchap_dhgroups": [ 00:31:02.302 "null", 00:31:02.302 "ffdhe2048", 00:31:02.302 "ffdhe3072", 00:31:02.302 "ffdhe4096", 00:31:02.302 "ffdhe6144", 00:31:02.302 "ffdhe8192" 00:31:02.302 ] 00:31:02.302 } 00:31:02.302 }, 00:31:02.302 { 00:31:02.302 "method": "bdev_nvme_attach_controller", 00:31:02.302 "params": { 00:31:02.302 "name": "nvme0", 00:31:02.302 "trtype": "TCP", 00:31:02.302 "adrfam": "IPv4", 00:31:02.302 "traddr": "127.0.0.1", 00:31:02.302 "trsvcid": "4420", 00:31:02.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:02.302 "prchk_reftag": false, 00:31:02.302 "prchk_guard": false, 00:31:02.302 "ctrlr_loss_timeout_sec": 0, 00:31:02.302 "reconnect_delay_sec": 0, 00:31:02.302 "fast_io_fail_timeout_sec": 0, 00:31:02.302 "psk": "key0", 00:31:02.302 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:02.302 "hdgst": false, 00:31:02.302 "ddgst": false 00:31:02.302 } 00:31:02.302 }, 00:31:02.302 { 00:31:02.302 "method": "bdev_nvme_set_hotplug", 00:31:02.302 "params": { 00:31:02.302 "period_us": 100000, 00:31:02.302 "enable": false 00:31:02.302 } 00:31:02.302 }, 00:31:02.302 { 00:31:02.302 "method": "bdev_wait_for_examine" 00:31:02.302 } 00:31:02.302 ] 00:31:02.302 }, 00:31:02.302 { 00:31:02.302 "subsystem": "nbd", 00:31:02.302 "config": [] 00:31:02.302 } 00:31:02.302 ] 00:31:02.302 }' 00:31:02.302 19:23:04 keyring_file -- keyring/file.sh@114 -- # killprocess 505659 00:31:02.302 19:23:04 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 505659 ']' 00:31:02.302 19:23:04 keyring_file -- common/autotest_common.sh@952 -- # kill -0 505659 00:31:02.302 19:23:04 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:02.302 19:23:04 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:02.302 19:23:04 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 505659 00:31:02.302 19:23:04 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:02.302 19:23:04 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:02.302 19:23:04 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 505659' 00:31:02.302 killing process with pid 505659 00:31:02.302 19:23:04 keyring_file -- common/autotest_common.sh@967 -- # kill 505659 00:31:02.302 Received shutdown signal, test time was about 1.000000 seconds 00:31:02.302 00:31:02.302 Latency(us) 00:31:02.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.302 =================================================================================================================== 00:31:02.302 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:02.302 19:23:04 keyring_file -- common/autotest_common.sh@972 -- # wait 505659 00:31:02.562 19:23:04 keyring_file -- keyring/file.sh@117 -- # bperfpid=507281 00:31:02.562 19:23:04 keyring_file -- keyring/file.sh@119 -- # waitforlisten 507281 /var/tmp/bperf.sock 00:31:02.562 19:23:04 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 507281 ']' 00:31:02.562 19:23:04 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:02.562 19:23:04 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:02.562 19:23:04 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:02.562 19:23:04 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:02.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
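The JSON blob echoed next is the configuration captured a few steps earlier with save_config; the second bdevperf instance is started with it via process substitution, which is why the trace shows -c /dev/fd/63. A sketch of that replay, paths abbreviated relative to the spdk checkout:

config=$(bperf_cmd save_config)   # keyring, sock, bdev and nvme state as JSON
./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
  -r /var/tmp/bperf.sock -z -c <(echo "$config")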
00:31:02.562 19:23:04 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:02.562 "subsystems": [ 00:31:02.562 { 00:31:02.562 "subsystem": "keyring", 00:31:02.562 "config": [ 00:31:02.562 { 00:31:02.562 "method": "keyring_file_add_key", 00:31:02.562 "params": { 00:31:02.562 "name": "key0", 00:31:02.562 "path": "/tmp/tmp.ImRnAvyt1H" 00:31:02.562 } 00:31:02.562 }, 00:31:02.562 { 00:31:02.562 "method": "keyring_file_add_key", 00:31:02.562 "params": { 00:31:02.562 "name": "key1", 00:31:02.562 "path": "/tmp/tmp.BF6BEqn4ts" 00:31:02.562 } 00:31:02.562 } 00:31:02.562 ] 00:31:02.562 }, 00:31:02.562 { 00:31:02.562 "subsystem": "iobuf", 00:31:02.562 "config": [ 00:31:02.562 { 00:31:02.562 "method": "iobuf_set_options", 00:31:02.562 "params": { 00:31:02.562 "small_pool_count": 8192, 00:31:02.562 "large_pool_count": 1024, 00:31:02.562 "small_bufsize": 8192, 00:31:02.562 "large_bufsize": 135168 00:31:02.562 } 00:31:02.562 } 00:31:02.562 ] 00:31:02.562 }, 00:31:02.562 { 00:31:02.562 "subsystem": "sock", 00:31:02.562 "config": [ 00:31:02.562 { 00:31:02.562 "method": "sock_set_default_impl", 00:31:02.562 "params": { 00:31:02.562 "impl_name": "posix" 00:31:02.562 } 00:31:02.562 }, 00:31:02.562 { 00:31:02.562 "method": "sock_impl_set_options", 00:31:02.562 "params": { 00:31:02.562 "impl_name": "ssl", 00:31:02.562 "recv_buf_size": 4096, 00:31:02.562 "send_buf_size": 4096, 00:31:02.562 "enable_recv_pipe": true, 00:31:02.562 "enable_quickack": false, 00:31:02.562 "enable_placement_id": 0, 00:31:02.562 "enable_zerocopy_send_server": true, 00:31:02.562 "enable_zerocopy_send_client": false, 00:31:02.562 "zerocopy_threshold": 0, 00:31:02.562 "tls_version": 0, 00:31:02.562 "enable_ktls": false 00:31:02.562 } 00:31:02.562 }, 00:31:02.562 { 00:31:02.562 "method": "sock_impl_set_options", 00:31:02.562 "params": { 00:31:02.562 "impl_name": "posix", 00:31:02.562 "recv_buf_size": 2097152, 00:31:02.562 "send_buf_size": 2097152, 00:31:02.562 "enable_recv_pipe": true, 00:31:02.562 "enable_quickack": false, 00:31:02.562 "enable_placement_id": 0, 00:31:02.562 "enable_zerocopy_send_server": true, 00:31:02.562 "enable_zerocopy_send_client": false, 00:31:02.562 "zerocopy_threshold": 0, 00:31:02.562 "tls_version": 0, 00:31:02.562 "enable_ktls": false 00:31:02.562 } 00:31:02.562 } 00:31:02.562 ] 00:31:02.562 }, 00:31:02.562 { 00:31:02.562 "subsystem": "vmd", 00:31:02.562 "config": [] 00:31:02.562 }, 00:31:02.562 { 00:31:02.562 "subsystem": "accel", 00:31:02.562 "config": [ 00:31:02.562 { 00:31:02.562 "method": "accel_set_options", 00:31:02.562 "params": { 00:31:02.562 "small_cache_size": 128, 00:31:02.562 "large_cache_size": 16, 00:31:02.562 "task_count": 2048, 00:31:02.562 "sequence_count": 2048, 00:31:02.562 "buf_count": 2048 00:31:02.562 } 00:31:02.562 } 00:31:02.562 ] 00:31:02.562 }, 00:31:02.562 { 00:31:02.562 "subsystem": "bdev", 00:31:02.562 "config": [ 00:31:02.562 { 00:31:02.562 "method": "bdev_set_options", 00:31:02.562 "params": { 00:31:02.562 "bdev_io_pool_size": 65535, 00:31:02.562 "bdev_io_cache_size": 256, 00:31:02.562 "bdev_auto_examine": true, 00:31:02.562 "iobuf_small_cache_size": 128, 00:31:02.562 "iobuf_large_cache_size": 16 00:31:02.562 } 00:31:02.562 }, 00:31:02.562 { 00:31:02.562 "method": "bdev_raid_set_options", 00:31:02.562 "params": { 00:31:02.562 "process_window_size_kb": 1024 00:31:02.562 } 00:31:02.562 }, 00:31:02.562 { 00:31:02.562 "method": "bdev_iscsi_set_options", 00:31:02.562 "params": { 00:31:02.562 "timeout_sec": 30 00:31:02.562 } 00:31:02.562 }, 00:31:02.562 { 00:31:02.562 "method": 
"bdev_nvme_set_options", 00:31:02.562 "params": { 00:31:02.562 "action_on_timeout": "none", 00:31:02.562 "timeout_us": 0, 00:31:02.562 "timeout_admin_us": 0, 00:31:02.562 "keep_alive_timeout_ms": 10000, 00:31:02.562 "arbitration_burst": 0, 00:31:02.562 "low_priority_weight": 0, 00:31:02.562 "medium_priority_weight": 0, 00:31:02.562 "high_priority_weight": 0, 00:31:02.562 "nvme_adminq_poll_period_us": 10000, 00:31:02.562 "nvme_ioq_poll_period_us": 0, 00:31:02.562 "io_queue_requests": 512, 00:31:02.562 "delay_cmd_submit": true, 00:31:02.562 "transport_retry_count": 4, 00:31:02.562 "bdev_retry_count": 3, 00:31:02.562 "transport_ack_timeout": 0, 00:31:02.562 "ctrlr_loss_timeout_sec": 0, 00:31:02.562 "reconnect_delay_sec": 0, 00:31:02.562 "fast_io_fail_timeout_sec": 0, 00:31:02.562 "disable_auto_failback": false, 00:31:02.562 "generate_uuids": false, 00:31:02.562 "transport_tos": 0, 00:31:02.562 "nvme_error_stat": false, 00:31:02.562 "rdma_srq_size": 0, 00:31:02.562 "io_path_stat": false, 00:31:02.562 "allow_accel_sequence": false, 00:31:02.562 "rdma_max_cq_size": 0, 00:31:02.562 "rdma_cm_event_timeout_ms": 0, 00:31:02.562 "dhchap_digests": [ 00:31:02.562 "sha256", 00:31:02.562 "sha384", 00:31:02.562 "sha512" 00:31:02.562 ], 00:31:02.562 "dhchap_dhgroups": [ 00:31:02.562 "null", 00:31:02.562 "ffdhe2048", 00:31:02.562 "ffdhe3072", 00:31:02.562 "ffdhe4096", 00:31:02.562 "ffdhe6144", 00:31:02.562 "ffdhe8192" 00:31:02.562 ] 00:31:02.562 } 00:31:02.562 }, 00:31:02.562 { 00:31:02.562 "method": "bdev_nvme_attach_controller", 00:31:02.562 "params": { 00:31:02.562 "name": "nvme0", 00:31:02.562 "trtype": "TCP", 00:31:02.562 "adrfam": "IPv4", 00:31:02.562 "traddr": "127.0.0.1", 00:31:02.562 "trsvcid": "4420", 00:31:02.562 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:02.562 "prchk_reftag": false, 00:31:02.562 "prchk_guard": false, 00:31:02.562 "ctrlr_loss_timeout_sec": 0, 00:31:02.562 "reconnect_delay_sec": 0, 00:31:02.562 "fast_io_fail_timeout_sec": 0, 00:31:02.562 "psk": "key0", 00:31:02.562 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:02.562 "hdgst": false, 00:31:02.562 "ddgst": false 00:31:02.562 } 00:31:02.562 }, 00:31:02.562 { 00:31:02.562 "method": "bdev_nvme_set_hotplug", 00:31:02.562 "params": { 00:31:02.562 "period_us": 100000, 00:31:02.562 "enable": false 00:31:02.562 } 00:31:02.562 }, 00:31:02.562 { 00:31:02.562 "method": "bdev_wait_for_examine" 00:31:02.562 } 00:31:02.562 ] 00:31:02.562 }, 00:31:02.562 { 00:31:02.562 "subsystem": "nbd", 00:31:02.562 "config": [] 00:31:02.562 } 00:31:02.562 ] 00:31:02.562 }' 00:31:02.562 19:23:04 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:02.562 19:23:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:02.562 [2024-07-12 19:23:04.923124] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization... 
00:31:02.562 [2024-07-12 19:23:04.923170] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid507281 ] 00:31:02.563 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.563 [2024-07-12 19:23:04.991985] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.563 [2024-07-12 19:23:05.072127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.821 [2024-07-12 19:23:05.230394] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:03.389 19:23:05 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:03.389 19:23:05 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:31:03.389 19:23:05 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:03.389 19:23:05 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:03.389 19:23:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:03.389 19:23:05 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:03.389 19:23:05 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:03.389 19:23:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:03.389 19:23:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:03.389 19:23:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:03.389 19:23:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:03.389 19:23:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:03.647 19:23:06 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:03.647 19:23:06 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:03.647 19:23:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:03.647 19:23:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:03.647 19:23:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:03.647 19:23:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:03.647 19:23:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:03.906 19:23:06 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:03.906 19:23:06 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:03.906 19:23:06 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:03.906 19:23:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:04.165 19:23:06 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:04.165 19:23:06 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:04.165 19:23:06 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.ImRnAvyt1H /tmp/tmp.BF6BEqn4ts 00:31:04.165 19:23:06 keyring_file -- keyring/file.sh@20 -- # killprocess 507281 00:31:04.165 19:23:06 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 507281 ']' 00:31:04.165 19:23:06 keyring_file -- common/autotest_common.sh@952 -- # kill -0 507281 00:31:04.165 19:23:06 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:31:04.165 19:23:06 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:04.165 19:23:06 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 507281 00:31:04.165 19:23:06 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:04.165 19:23:06 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:04.165 19:23:06 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 507281' 00:31:04.165 killing process with pid 507281 00:31:04.165 19:23:06 keyring_file -- common/autotest_common.sh@967 -- # kill 507281 00:31:04.165 Received shutdown signal, test time was about 1.000000 seconds 00:31:04.165 00:31:04.165 Latency(us) 00:31:04.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.165 =================================================================================================================== 00:31:04.165 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:04.165 19:23:06 keyring_file -- common/autotest_common.sh@972 -- # wait 507281 00:31:04.165 19:23:06 keyring_file -- keyring/file.sh@21 -- # killprocess 505565 00:31:04.165 19:23:06 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 505565 ']' 00:31:04.165 19:23:06 keyring_file -- common/autotest_common.sh@952 -- # kill -0 505565 00:31:04.165 19:23:06 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:04.165 19:23:06 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:04.165 19:23:06 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 505565 00:31:04.423 19:23:06 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:04.423 19:23:06 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:04.423 19:23:06 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 505565' 00:31:04.423 killing process with pid 505565 00:31:04.423 19:23:06 keyring_file -- common/autotest_common.sh@967 -- # kill 505565 00:31:04.423 [2024-07-12 19:23:06.762031] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:04.423 19:23:06 keyring_file -- common/autotest_common.sh@972 -- # wait 505565 00:31:04.682 00:31:04.682 real 0m12.204s 00:31:04.682 user 0m29.551s 00:31:04.682 sys 0m2.649s 00:31:04.682 19:23:07 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:04.682 19:23:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:04.682 ************************************ 00:31:04.682 END TEST keyring_file 00:31:04.682 ************************************ 00:31:04.682 19:23:07 -- common/autotest_common.sh@1142 -- # return 0 00:31:04.682 19:23:07 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:31:04.682 19:23:07 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:31:04.682 19:23:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:04.682 19:23:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:04.682 19:23:07 -- common/autotest_common.sh@10 -- # set +x 00:31:04.682 ************************************ 00:31:04.682 START TEST keyring_linux 00:31:04.682 ************************************ 00:31:04.682 19:23:07 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:31:04.682 * Looking for test storage...
00:31:04.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring
00:31:04.682 19:23:07 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh
00:31:04.682 19:23:07 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@7 -- # uname -s
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:04.682 19:23:07 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:04.682 19:23:07 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:04.682 19:23:07 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:04.682 19:23:07 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:04.682 19:23:07 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:04.682 19:23:07 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:04.682 19:23:07 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:04.682 19:23:07 keyring_linux -- paths/export.sh@5 -- # export PATH
00:31:04.683 19:23:07 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:04.683 19:23:07 keyring_linux -- nvmf/common.sh@47 -- # : 0
00:31:04.683 19:23:07 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:31:04.683 19:23:07 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:31:04.683 19:23:07 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:04.683 19:23:07 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:04.683 19:23:07 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:04.683 19:23:07 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:31:04.683 19:23:07 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:31:04.683 19:23:07 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0
00:31:04.683 19:23:07 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock
00:31:04.683 19:23:07 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0
00:31:04.941 19:23:07 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:31:04.941 19:23:07 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff
00:31:04.941 19:23:07 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00
00:31:04.941 19:23:07 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT
00:31:04.941 19:23:07 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@17 -- # name=key0
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@17 -- # digest=0
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:31:04.941 19:23:07 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:31:04.941 19:23:07 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest
00:31:04.941 19:23:07 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:31:04.941 19:23:07 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff
00:31:04.941 19:23:07 keyring_linux -- nvmf/common.sh@704 -- # digest=0
00:31:04.941 19:23:07 keyring_linux -- nvmf/common.sh@705 -- # python -
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0
00:31:04.941 /tmp/:spdk-test:key0
00:31:04.941 19:23:07 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@17 -- # name=key1
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@17 -- # digest=0
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0
00:31:04.941 19:23:07 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0
00:31:04.941 19:23:07 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest
00:31:04.941 19:23:07 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:31:04.941 19:23:07 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00
00:31:04.941 19:23:07 keyring_linux -- nvmf/common.sh@704 -- # digest=0
00:31:04.941 19:23:07 keyring_linux -- nvmf/common.sh@705 -- # python -
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1
00:31:04.941 19:23:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1
00:31:04.941 /tmp/:spdk-test:key1
00:31:04.941 19:23:07 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=507641
00:31:04.941 19:23:07 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 507641
00:31:04.941 19:23:07 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:31:04.941 19:23:07 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 507641 ']'
00:31:04.941 19:23:07 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:04.941 19:23:07 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100
00:31:04.941 19:23:07 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:04.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
19:23:07 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable
00:31:04.941 19:23:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x
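A note on the prep_key trace above: each key is written to a file in the NVMe TLS PSK "interchange" form before it ever touches the kernel keyring, and the actual encoding happens in the `python -` step of format_interchange_psk. The following is a minimal sketch of that computation, assuming the interchange framing is base64 over the raw key bytes plus a little-endian CRC32 trailer, with `00` standing for digest 0 (no HMAC); the key value and output path are the ones used above, everything else is illustrative rather than the verbatim SPDK helper:

# Sketch only, under the CRC32-trailer assumption stated above.
key=00112233445566778899aabbccddeeff             # value handed to prep_key key0
python3 - "$key" <<'EOF' > /tmp/:spdk-test:key0
import base64, binascii, sys
key = sys.argv[1].encode()                       # key bytes exactly as configured
crc = binascii.crc32(key).to_bytes(4, "little")  # 4-byte CRC32 trailer
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
EOF
chmod 0600 /tmp/:spdk-test:key0                  # same tightening as keyring/common.sh@21

Base64-decoding the NVMeTLSkey-1:00:...JEiQ: payload the log prints for key0 yields the ASCII key followed by four trailer bytes, which is what the sketch's assumption is based on.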
00:31:04.941 [2024-07-12 19:23:07.390102] Starting SPDK v24.09-pre git sha1 5f33ec93a / DPDK 24.03.0 initialization...
00:31:04.941 [2024-07-12 19:23:07.390157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid507641 ]
00:31:04.941 EAL: No free 2048 kB hugepages reported on node 1
00:31:04.941 [2024-07-12 19:23:07.458274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:05.199 [2024-07-12 19:23:07.538059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:31:05.766 19:23:08 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:31:05.766 19:23:08 keyring_linux -- common/autotest_common.sh@862 -- # return 0
00:31:05.766 19:23:08 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd
00:31:05.766 19:23:08 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:05.766 19:23:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:31:05.766 [2024-07-12 19:23:08.201905] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:05.766 null0
00:31:05.766 [2024-07-12 19:23:08.233954] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:31:05.766 [2024-07-12 19:23:08.234280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:31:05.766 19:23:08 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:05.766 19:23:08 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
00:31:05.766 463349336
00:31:05.766 19:23:08 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s
00:31:05.766 1024309772
00:31:05.766 19:23:08 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=507872
00:31:05.766 19:23:08 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 507872 /var/tmp/bperf.sock
00:31:05.766 19:23:08 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 507872 ']'
00:31:05.766 19:23:08 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:05.766 19:23:08 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc
00:31:05.766 19:23:08 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100
00:31:05.766 19:23:08 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:05.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
19:23:08 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable
00:31:05.766 19:23:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x
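The two `keyctl add user ... @s` calls above are where the formatted PSKs enter the kernel: adding a user-type key to the session keyring (@s) prints the kernel-assigned serial number (463349336 and 1024309772 here), which the rest of the test resolves and compares by name. A minimal round trip over the same keyctl surface, sketched for illustration:

sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)  # add payload, capture serial
keyctl search @s user :spdk-test:key0   # resolves the same serial by description
keyctl print "$sn"                      # dumps the stored payload
keyctl unlink "$sn"                     # drops the link again, as cleanup does below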
00:31:05.766 [2024-07-12 19:23:08.316440] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid507872 ] 00:31:06.025 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.025 [2024-07-12 19:23:08.384648] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.025 [2024-07-12 19:23:08.457336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.592 19:23:09 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:06.592 19:23:09 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:31:06.592 19:23:09 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:06.592 19:23:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:06.851 19:23:09 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:06.851 19:23:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:07.109 19:23:09 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:07.109 19:23:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:07.109 [2024-07-12 19:23:09.652944] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:07.368 nvme0n1 00:31:07.368 19:23:09 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:07.368 19:23:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:07.368 19:23:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:07.368 19:23:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:07.368 19:23:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:07.368 19:23:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:07.368 19:23:09 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:07.368 19:23:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:07.368 19:23:09 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:07.368 19:23:09 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:07.368 19:23:09 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:07.368 19:23:09 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:07.368 19:23:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:07.627 19:23:10 keyring_linux -- keyring/linux.sh@25 -- # sn=463349336 00:31:07.627 19:23:10 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:07.627 19:23:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:31:07.627 19:23:10 keyring_linux -- keyring/linux.sh@26 -- # [[ 463349336 == \4\6\3\3\4\9\3\3\6 ]] 00:31:07.627 19:23:10 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 463349336 00:31:07.627 19:23:10 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:07.627 19:23:10 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:07.886 Running I/O for 1 seconds... 00:31:08.822 00:31:08.822 Latency(us) 00:31:08.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.822 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:08.823 nvme0n1 : 1.01 21032.61 82.16 0.00 0.00 6063.42 4900.95 10029.86 00:31:08.823 =================================================================================================================== 00:31:08.823 Total : 21032.61 82.16 0.00 0.00 6063.42 4900.95 10029.86 00:31:08.823 0 00:31:08.823 19:23:11 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:08.823 19:23:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:09.082 19:23:11 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:09.082 19:23:11 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:09.082 19:23:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:09.082 19:23:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:09.082 19:23:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:09.082 19:23:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:09.082 19:23:11 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:09.082 19:23:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:09.082 19:23:11 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:09.082 19:23:11 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:09.082 19:23:11 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:31:09.082 19:23:11 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:09.082 19:23:11 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:09.082 19:23:11 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:09.082 19:23:11 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:09.082 19:23:11 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:09.082 19:23:11 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:09.082 19:23:11 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:09.341 [2024-07-12 19:23:11.773861] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:09.341 [2024-07-12 19:23:11.774110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebbfd0 (107): Transport endpoint is not connected 00:31:09.341 [2024-07-12 19:23:11.775106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebbfd0 (9): Bad file descriptor 00:31:09.341 [2024-07-12 19:23:11.776108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:09.341 [2024-07-12 19:23:11.776118] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:09.341 [2024-07-12 19:23:11.776125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:09.341 request: 00:31:09.341 { 00:31:09.341 "name": "nvme0", 00:31:09.341 "trtype": "tcp", 00:31:09.341 "traddr": "127.0.0.1", 00:31:09.341 "adrfam": "ipv4", 00:31:09.341 "trsvcid": "4420", 00:31:09.341 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:09.341 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:09.341 "prchk_reftag": false, 00:31:09.341 "prchk_guard": false, 00:31:09.341 "hdgst": false, 00:31:09.341 "ddgst": false, 00:31:09.341 "psk": ":spdk-test:key1", 00:31:09.341 "method": "bdev_nvme_attach_controller", 00:31:09.341 "req_id": 1 00:31:09.341 } 00:31:09.341 Got JSON-RPC error response 00:31:09.341 response: 00:31:09.341 { 00:31:09.341 "code": -5, 00:31:09.341 "message": "Input/output error" 00:31:09.341 } 00:31:09.341 19:23:11 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:31:09.341 19:23:11 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:09.341 19:23:11 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:09.341 19:23:11 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:09.341 19:23:11 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:09.341 19:23:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:09.341 19:23:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:09.341 19:23:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:09.341 19:23:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:09.341 19:23:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:09.341 19:23:11 keyring_linux -- keyring/linux.sh@33 -- # sn=463349336 00:31:09.341 19:23:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 463349336 00:31:09.341 1 links removed 00:31:09.341 19:23:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:09.341 19:23:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:09.341 19:23:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:09.341 19:23:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:09.341 19:23:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:09.341 19:23:11 keyring_linux -- keyring/linux.sh@33 -- # sn=1024309772 00:31:09.341 
19:23:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1024309772 00:31:09.341 1 links removed 00:31:09.341 19:23:11 keyring_linux -- keyring/linux.sh@41 -- # killprocess 507872 00:31:09.341 19:23:11 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 507872 ']' 00:31:09.341 19:23:11 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 507872 00:31:09.341 19:23:11 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:31:09.341 19:23:11 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:09.341 19:23:11 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 507872 00:31:09.341 19:23:11 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:09.341 19:23:11 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:09.341 19:23:11 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 507872' 00:31:09.341 killing process with pid 507872 00:31:09.341 19:23:11 keyring_linux -- common/autotest_common.sh@967 -- # kill 507872 00:31:09.341 Received shutdown signal, test time was about 1.000000 seconds 00:31:09.341 00:31:09.341 Latency(us) 00:31:09.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.341 =================================================================================================================== 00:31:09.341 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:09.341 19:23:11 keyring_linux -- common/autotest_common.sh@972 -- # wait 507872 00:31:09.601 19:23:12 keyring_linux -- keyring/linux.sh@42 -- # killprocess 507641 00:31:09.601 19:23:12 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 507641 ']' 00:31:09.601 19:23:12 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 507641 00:31:09.601 19:23:12 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:31:09.601 19:23:12 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:09.601 19:23:12 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 507641 00:31:09.601 19:23:12 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:09.601 19:23:12 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:09.601 19:23:12 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 507641' 00:31:09.601 killing process with pid 507641 00:31:09.601 19:23:12 keyring_linux -- common/autotest_common.sh@967 -- # kill 507641 00:31:09.601 19:23:12 keyring_linux -- common/autotest_common.sh@972 -- # wait 507641 00:31:09.860 00:31:09.860 real 0m5.263s 00:31:09.860 user 0m9.697s 00:31:09.860 sys 0m1.488s 00:31:09.860 19:23:12 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:09.860 19:23:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:09.860 ************************************ 00:31:09.860 END TEST keyring_linux 00:31:09.860 ************************************ 00:31:10.119 19:23:12 -- common/autotest_common.sh@1142 -- # return 0 00:31:10.119 19:23:12 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:31:10.119 19:23:12 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:10.119 19:23:12 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:10.119 19:23:12 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:31:10.119 19:23:12 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:31:10.120 19:23:12 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:10.120 19:23:12 -- spdk/autotest.sh@339 -- # 
'[' 0 -eq 1 ']' 00:31:10.120 19:23:12 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:10.120 19:23:12 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:10.120 19:23:12 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:10.120 19:23:12 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:10.120 19:23:12 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:10.120 19:23:12 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:10.120 19:23:12 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:10.120 19:23:12 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:10.120 19:23:12 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:31:10.120 19:23:12 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:31:10.120 19:23:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:10.120 19:23:12 -- common/autotest_common.sh@10 -- # set +x 00:31:10.120 19:23:12 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:31:10.120 19:23:12 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:10.120 19:23:12 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:10.120 19:23:12 -- common/autotest_common.sh@10 -- # set +x 00:31:15.393 INFO: APP EXITING 00:31:15.393 INFO: killing all VMs 00:31:15.393 INFO: killing vhost app 00:31:15.393 INFO: EXIT DONE 00:31:17.931 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:31:17.931 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:31:17.931 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:31:17.931 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:31:17.931 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:31:17.931 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:31:17.931 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:31:17.931 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:31:17.931 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:31:17.931 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:31:17.931 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:31:17.931 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:31:17.931 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:31:17.931 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:31:17.931 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:31:17.931 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:31:17.931 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:31:20.467 Cleaning 00:31:20.726 Removing: /var/run/dpdk/spdk0/config 00:31:20.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:20.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:20.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:20.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:20.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:31:20.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:31:20.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:31:20.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:31:20.726 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:20.726 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:20.726 Removing: /var/run/dpdk/spdk1/config 00:31:20.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:20.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:20.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:20.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:20.726 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:31:20.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:31:20.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:31:20.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:31:20.726 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:20.726 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:20.726 Removing: /var/run/dpdk/spdk1/mp_socket 00:31:20.726 Removing: /var/run/dpdk/spdk2/config 00:31:20.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:20.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:20.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:20.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:20.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:31:20.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:31:20.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:31:20.726 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:31:20.726 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:20.726 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:20.726 Removing: /var/run/dpdk/spdk3/config 00:31:20.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:20.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:20.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:20.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:20.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:31:20.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:31:20.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:31:20.726 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:31:20.726 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:20.726 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:20.726 Removing: /var/run/dpdk/spdk4/config 00:31:20.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:20.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:20.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:20.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:20.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:31:20.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:31:20.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:31:20.726 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:31:20.726 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:20.726 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:20.726 Removing: /dev/shm/bdev_svc_trace.1 00:31:20.726 Removing: /dev/shm/nvmf_trace.0 00:31:20.726 Removing: /dev/shm/spdk_tgt_trace.pid117586 00:31:20.726 Removing: /var/run/dpdk/spdk0 00:31:20.726 Removing: /var/run/dpdk/spdk1 00:31:20.726 Removing: /var/run/dpdk/spdk2 00:31:20.726 Removing: /var/run/dpdk/spdk3 00:31:20.726 Removing: /var/run/dpdk/spdk4 00:31:20.726 Removing: /var/run/dpdk/spdk_pid115452 00:31:20.726 Removing: /var/run/dpdk/spdk_pid116518 00:31:20.726 Removing: /var/run/dpdk/spdk_pid117586 00:31:20.726 Removing: /var/run/dpdk/spdk_pid118222 00:31:20.726 Removing: /var/run/dpdk/spdk_pid119171 00:31:20.726 Removing: /var/run/dpdk/spdk_pid119407 00:31:20.726 Removing: /var/run/dpdk/spdk_pid120378 00:31:20.985 Removing: /var/run/dpdk/spdk_pid120611 00:31:20.985 Removing: /var/run/dpdk/spdk_pid120845 00:31:20.985 Removing: /var/run/dpdk/spdk_pid122582 00:31:20.985 Removing: /var/run/dpdk/spdk_pid124018 00:31:20.985 Removing: /var/run/dpdk/spdk_pid124295 
00:31:20.985 Removing: /var/run/dpdk/spdk_pid124600
00:31:20.985 Removing: /var/run/dpdk/spdk_pid125024
00:31:20.985 Removing: /var/run/dpdk/spdk_pid125389
00:31:20.985 Removing: /var/run/dpdk/spdk_pid125599
00:31:20.985 Removing: /var/run/dpdk/spdk_pid125796
00:31:20.985 Removing: /var/run/dpdk/spdk_pid126090
00:31:20.985 Removing: /var/run/dpdk/spdk_pid126918
00:31:20.985 Removing: /var/run/dpdk/spdk_pid129910
00:31:20.985 Removing: /var/run/dpdk/spdk_pid130185
00:31:20.985 Removing: /var/run/dpdk/spdk_pid130497
00:31:20.985 Removing: /var/run/dpdk/spdk_pid130665
00:31:20.985 Removing: /var/run/dpdk/spdk_pid131153
00:31:20.985 Removing: /var/run/dpdk/spdk_pid131339
00:31:20.985 Removing: /var/run/dpdk/spdk_pid131660
00:31:20.985 Removing: /var/run/dpdk/spdk_pid131891
00:31:20.985 Removing: /var/run/dpdk/spdk_pid132153
00:31:20.985 Removing: /var/run/dpdk/spdk_pid132312
00:31:20.985 Removing: /var/run/dpdk/spdk_pid132431
00:31:20.985 Removing: /var/run/dpdk/spdk_pid132654
00:31:20.985 Removing: /var/run/dpdk/spdk_pid133199
00:31:20.985 Removing: /var/run/dpdk/spdk_pid133400
00:31:20.985 Removing: /var/run/dpdk/spdk_pid133721
00:31:20.985 Removing: /var/run/dpdk/spdk_pid133990
00:31:20.985 Removing: /var/run/dpdk/spdk_pid134039
00:31:20.985 Removing: /var/run/dpdk/spdk_pid134112
00:31:20.985 Removing: /var/run/dpdk/spdk_pid134362
00:31:20.985 Removing: /var/run/dpdk/spdk_pid134617
00:31:20.985 Removing: /var/run/dpdk/spdk_pid134888
00:31:20.985 Removing: /var/run/dpdk/spdk_pid135157
00:31:20.985 Removing: /var/run/dpdk/spdk_pid135445
00:31:20.985 Removing: /var/run/dpdk/spdk_pid135720
00:31:20.985 Removing: /var/run/dpdk/spdk_pid135985
00:31:20.985 Removing: /var/run/dpdk/spdk_pid136254
00:31:20.985 Removing: /var/run/dpdk/spdk_pid136523
00:31:20.985 Removing: /var/run/dpdk/spdk_pid136801
00:31:20.985 Removing: /var/run/dpdk/spdk_pid137068
00:31:20.985 Removing: /var/run/dpdk/spdk_pid137315
00:31:20.985 Removing: /var/run/dpdk/spdk_pid137573
00:31:20.985 Removing: /var/run/dpdk/spdk_pid137818
00:31:20.985 Removing: /var/run/dpdk/spdk_pid138066
00:31:20.985 Removing: /var/run/dpdk/spdk_pid138320
00:31:20.985 Removing: /var/run/dpdk/spdk_pid138568
00:31:20.985 Removing: /var/run/dpdk/spdk_pid138824
00:31:20.985 Removing: /var/run/dpdk/spdk_pid139072
00:31:20.985 Removing: /var/run/dpdk/spdk_pid139321
00:31:20.985 Removing: /var/run/dpdk/spdk_pid139396
00:31:20.985 Removing: /var/run/dpdk/spdk_pid139823
00:31:20.985 Removing: /var/run/dpdk/spdk_pid143561
00:31:20.985 Removing: /var/run/dpdk/spdk_pid188380
00:31:20.985 Removing: /var/run/dpdk/spdk_pid192707
00:31:20.985 Removing: /var/run/dpdk/spdk_pid202679
00:31:20.985 Removing: /var/run/dpdk/spdk_pid208069
00:31:20.985 Removing: /var/run/dpdk/spdk_pid212055
00:31:20.985 Removing: /var/run/dpdk/spdk_pid212538
00:31:20.985 Removing: /var/run/dpdk/spdk_pid219264
00:31:20.985 Removing: /var/run/dpdk/spdk_pid225280
00:31:20.985 Removing: /var/run/dpdk/spdk_pid225302
00:31:20.985 Removing: /var/run/dpdk/spdk_pid226198
00:31:20.985 Removing: /var/run/dpdk/spdk_pid227113
00:31:20.985 Removing: /var/run/dpdk/spdk_pid228027
00:31:20.985 Removing: /var/run/dpdk/spdk_pid228493
00:31:20.985 Removing: /var/run/dpdk/spdk_pid228507
00:31:20.985 Removing: /var/run/dpdk/spdk_pid228771
00:31:20.985 Removing: /var/run/dpdk/spdk_pid228967
00:31:20.985 Removing: /var/run/dpdk/spdk_pid228969
00:31:20.985 Removing: /var/run/dpdk/spdk_pid229884
00:31:21.244 Removing: /var/run/dpdk/spdk_pid230794
00:31:21.244 Removing: /var/run/dpdk/spdk_pid231641
00:31:21.244 Removing: /var/run/dpdk/spdk_pid232306
00:31:21.244 Removing: /var/run/dpdk/spdk_pid232408
00:31:21.244 Removing: /var/run/dpdk/spdk_pid232636
00:31:21.244 Removing: /var/run/dpdk/spdk_pid233883
00:31:21.244 Removing: /var/run/dpdk/spdk_pid234882
00:31:21.244 Removing: /var/run/dpdk/spdk_pid243214
00:31:21.244 Removing: /var/run/dpdk/spdk_pid243669
00:31:21.244 Removing: /var/run/dpdk/spdk_pid247924
00:31:21.244 Removing: /var/run/dpdk/spdk_pid253585
00:31:21.244 Removing: /var/run/dpdk/spdk_pid256860
00:31:21.244 Removing: /var/run/dpdk/spdk_pid267339
00:31:21.244 Removing: /var/run/dpdk/spdk_pid276313
00:31:21.244 Removing: /var/run/dpdk/spdk_pid278136
00:31:21.244 Removing: /var/run/dpdk/spdk_pid279061
00:31:21.244 Removing: /var/run/dpdk/spdk_pid295869
00:31:21.244 Removing: /var/run/dpdk/spdk_pid299649
00:31:21.244 Removing: /var/run/dpdk/spdk_pid325765
00:31:21.244 Removing: /var/run/dpdk/spdk_pid330259
00:31:21.244 Removing: /var/run/dpdk/spdk_pid331860
00:31:21.244 Removing: /var/run/dpdk/spdk_pid333696
00:31:21.244 Removing: /var/run/dpdk/spdk_pid333938
00:31:21.244 Removing: /var/run/dpdk/spdk_pid334176
00:31:21.244 Removing: /var/run/dpdk/spdk_pid334408
00:31:21.244 Removing: /var/run/dpdk/spdk_pid334919
00:31:21.244 Removing: /var/run/dpdk/spdk_pid336762
00:31:21.244 Removing: /var/run/dpdk/spdk_pid337750
00:31:21.244 Removing: /var/run/dpdk/spdk_pid338252
00:31:21.244 Removing: /var/run/dpdk/spdk_pid341082
00:31:21.244 Removing: /var/run/dpdk/spdk_pid341653
00:31:21.244 Removing: /var/run/dpdk/spdk_pid342326
00:31:21.244 Removing: /var/run/dpdk/spdk_pid346578
00:31:21.244 Removing: /var/run/dpdk/spdk_pid356750
00:31:21.244 Removing: /var/run/dpdk/spdk_pid360700
00:31:21.244 Removing: /var/run/dpdk/spdk_pid366767
00:31:21.244 Removing: /var/run/dpdk/spdk_pid368077
00:31:21.244 Removing: /var/run/dpdk/spdk_pid369644
00:31:21.244 Removing: /var/run/dpdk/spdk_pid373947
00:31:21.244 Removing: /var/run/dpdk/spdk_pid378184
00:31:21.244 Removing: /var/run/dpdk/spdk_pid385569
00:31:21.244 Removing: /var/run/dpdk/spdk_pid385644
00:31:21.244 Removing: /var/run/dpdk/spdk_pid390792
00:31:21.244 Removing: /var/run/dpdk/spdk_pid391018
00:31:21.244 Removing: /var/run/dpdk/spdk_pid391245
00:31:21.244 Removing: /var/run/dpdk/spdk_pid391665
00:31:21.244 Removing: /var/run/dpdk/spdk_pid391711
00:31:21.244 Removing: /var/run/dpdk/spdk_pid396185
00:31:21.244 Removing: /var/run/dpdk/spdk_pid396762
00:31:21.244 Removing: /var/run/dpdk/spdk_pid401091
00:31:21.244 Removing: /var/run/dpdk/spdk_pid403844
00:31:21.244 Removing: /var/run/dpdk/spdk_pid409236
00:31:21.244 Removing: /var/run/dpdk/spdk_pid414630
00:31:21.244 Removing: /var/run/dpdk/spdk_pid423172
00:31:21.244 Removing: /var/run/dpdk/spdk_pid430300
00:31:21.244 Removing: /var/run/dpdk/spdk_pid430303
00:31:21.244 Removing: /var/run/dpdk/spdk_pid449161
00:31:21.244 Removing: /var/run/dpdk/spdk_pid449854
00:31:21.244 Removing: /var/run/dpdk/spdk_pid450542
00:31:21.244 Removing: /var/run/dpdk/spdk_pid451181
00:31:21.244 Removing: /var/run/dpdk/spdk_pid451997
00:31:21.244 Removing: /var/run/dpdk/spdk_pid452690
00:31:21.244 Removing: /var/run/dpdk/spdk_pid453388
00:31:21.244 Removing: /var/run/dpdk/spdk_pid453990
00:31:21.244 Removing: /var/run/dpdk/spdk_pid458295
00:31:21.244 Removing: /var/run/dpdk/spdk_pid458574
00:31:21.244 Removing: /var/run/dpdk/spdk_pid464476
00:31:21.244 Removing: /var/run/dpdk/spdk_pid464704
00:31:21.244 Removing: /var/run/dpdk/spdk_pid466925
00:31:21.503 Removing: /var/run/dpdk/spdk_pid474888
00:31:21.503 Removing: /var/run/dpdk/spdk_pid474893
00:31:21.503 Removing: /var/run/dpdk/spdk_pid480033
00:31:21.503 Removing: /var/run/dpdk/spdk_pid482402
00:31:21.503 Removing: /var/run/dpdk/spdk_pid484373
00:31:21.503 Removing: /var/run/dpdk/spdk_pid485418
00:31:21.503 Removing: /var/run/dpdk/spdk_pid487477
00:31:21.503 Removing: /var/run/dpdk/spdk_pid488668
00:31:21.503 Removing: /var/run/dpdk/spdk_pid497396
00:31:21.503 Removing: /var/run/dpdk/spdk_pid497872
00:31:21.503 Removing: /var/run/dpdk/spdk_pid498537
00:31:21.503 Removing: /var/run/dpdk/spdk_pid500818
00:31:21.503 Removing: /var/run/dpdk/spdk_pid501283
00:31:21.503 Removing: /var/run/dpdk/spdk_pid501753
00:31:21.503 Removing: /var/run/dpdk/spdk_pid505565
00:31:21.503 Removing: /var/run/dpdk/spdk_pid505659
00:31:21.503 Removing: /var/run/dpdk/spdk_pid507281
00:31:21.503 Removing: /var/run/dpdk/spdk_pid507641
00:31:21.503 Removing: /var/run/dpdk/spdk_pid507872
00:31:21.503 Clean
00:31:21.503 19:23:23 -- common/autotest_common.sh@1451 -- # return 0
00:31:21.503 19:23:23 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:31:21.503 19:23:23 -- common/autotest_common.sh@728 -- # xtrace_disable
00:31:21.503 19:23:23 -- common/autotest_common.sh@10 -- # set +x
00:31:21.503 19:23:23 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:31:21.503 19:23:23 -- common/autotest_common.sh@728 -- # xtrace_disable
00:31:21.503 19:23:23 -- common/autotest_common.sh@10 -- # set +x
00:31:21.503 19:23:24 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:21.503 19:23:24 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:31:21.503 19:23:24 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:31:21.503 19:23:24 -- spdk/autotest.sh@391 -- # hash lcov
00:31:21.503 19:23:24 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:31:21.503 19:23:24 -- spdk/autotest.sh@393 -- # hostname
00:31:21.503 19:23:24 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:31:21.761 geninfo: WARNING: invalid characters removed from testname!
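Coverage post-processing begins here: a baseline capture (cov_base.info) was taken before the tests ran, the per-test capture (cov_test.info) was just produced by the lcov -c command above, and the calls that follow merge the two and strip out-of-tree code. Reduced to its skeleton, with the long --rc flags and absolute paths elided for readability; a sketch of the pattern, not the verbatim autotest commands:

lcov -q -a cov_base.info -a cov_test.info -o cov_total.info   # merge baseline + test capture
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info        # drop the bundled DPDK sources
lcov -q -r cov_total.info '/usr/*' -o cov_total.info          # drop system headers
# the remaining -r passes below also remove examples/vmd, app/spdk_lspci and app/spdk_top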
00:31:43.718 19:23:44 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:44.285 19:23:46 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:46.189 19:23:48 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:48.095 19:23:50 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:50.001 19:23:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:51.905 19:23:54 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:53.812 19:23:55 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:31:53.812 19:23:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:53.812 19:23:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:31:53.812 19:23:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:53.812 19:23:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:53.812 19:23:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:53.812 19:23:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:53.812 19:23:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:53.812 19:23:55 -- paths/export.sh@5 -- $ export PATH
00:31:53.812 19:23:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:53.812 19:23:55 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:31:53.812 19:23:55 -- common/autobuild_common.sh@444 -- $ date +%s
00:31:53.812 19:23:56 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720805036.XXXXXX
00:31:53.812 19:23:56 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720805036.FQg1PA
00:31:53.812 19:23:56 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:31:53.812 19:23:56 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:31:53.812 19:23:56 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:31:53.812 19:23:56 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:31:53.812 19:23:56 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:31:53.812 19:23:56 -- common/autobuild_common.sh@460 -- $ get_config_params
00:31:53.812 19:23:56 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:31:53.812 19:23:56 -- common/autotest_common.sh@10 -- $ set +x
00:31:53.812 19:23:56 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:31:53.812 19:23:56 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:31:53.812 19:23:56 -- pm/common@17 -- $ local monitor
00:31:53.812 19:23:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:53.812 19:23:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:53.812 19:23:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:53.812 19:23:56 -- pm/common@21 -- $ date +%s
00:31:53.812 19:23:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:53.812 19:23:56 -- pm/common@21 -- $ date +%s
00:31:53.812 19:23:56 -- pm/common@25 -- $ sleep 1
00:31:53.812 19:23:56 -- pm/common@21 -- $ date +%s
00:31:53.812 19:23:56 -- pm/common@21 -- $ date +%s
00:31:53.812 19:23:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720805036
00:31:53.812 19:23:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720805036
00:31:53.812 19:23:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720805036
00:31:53.812 19:23:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720805036
00:31:53.812 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720805036_collect-vmstat.pm.log
00:31:53.812 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720805036_collect-cpu-load.pm.log
00:31:53.812 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720805036_collect-cpu-temp.pm.log
00:31:53.812 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720805036_collect-bmc-pm.bmc.pm.log
00:31:54.750 19:23:57 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:31:54.750 19:23:57 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:31:54.750 19:23:57 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:54.750 19:23:57 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:54.750 19:23:57 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:54.750 19:23:57 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:54.750 19:23:57 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:54.750 19:23:57 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:54.750 19:23:57 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:54.750 19:23:57 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:54.750 19:23:57 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:54.750 19:23:57 -- pm/common@29 -- $ signal_monitor_resources TERM
00:31:54.750 19:23:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:31:54.750 19:23:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:54.750 19:23:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:31:54.750 19:23:57 -- pm/common@44 -- $ pid=518166
00:31:54.750 19:23:57 -- pm/common@50 -- $ kill -TERM 518166
00:31:54.750 19:23:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:54.750 19:23:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:31:54.750 19:23:57 -- pm/common@44 -- $ pid=518167
00:31:54.750 19:23:57 -- pm/common@50 -- $ kill -TERM 518167
00:31:54.750 19:23:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:54.750 19:23:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:31:54.750 19:23:57 -- pm/common@44 -- $ pid=518169
00:31:54.750 19:23:57 -- pm/common@50 -- $ kill -TERM 518169
00:31:54.750 19:23:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:54.750 19:23:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:31:54.750 19:23:57 -- pm/common@44 -- $ pid=518192
00:31:54.750 19:23:57 -- pm/common@50 -- $ sudo -E kill -TERM 518192
00:31:54.750 + [[ -n 10180 ]]
00:31:54.750 + sudo kill 10180
00:31:54.761 [Pipeline] }
00:31:54.785 [Pipeline] // stage
00:31:54.791 [Pipeline] }
00:31:54.812 [Pipeline] // timeout
00:31:54.817 [Pipeline] }
00:31:54.833 [Pipeline] // catchError
00:31:54.839 [Pipeline] }
00:31:54.852 [Pipeline] // wrap
00:31:54.858 [Pipeline] }
00:31:54.871 [Pipeline] // catchError
00:31:54.879 [Pipeline] stage
00:31:54.881 [Pipeline] { (Epilogue)
00:31:54.893 [Pipeline] catchError
00:31:54.894 [Pipeline] {
00:31:54.908 [Pipeline] echo
00:31:54.910 Cleanup processes
00:31:54.916 [Pipeline] sh
00:31:55.206 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:55.206 518281 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:31:55.206 518567 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:55.221 [Pipeline] sh
00:31:55.507 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:55.507 ++ grep -v 'sudo pgrep'
00:31:55.507 ++ awk '{print $1}'
00:31:55.507 + sudo kill -9 518281
00:31:55.520 [Pipeline] sh
00:31:55.809 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:05.810 [Pipeline] sh
00:32:06.101 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:06.101 Artifacts sizes are good
00:32:06.118 [Pipeline] archiveArtifacts
00:32:06.125 Archiving artifacts
00:32:06.702 [Pipeline] sh
00:32:06.988 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:32:07.004 [Pipeline] cleanWs
00:32:07.014 [WS-CLEANUP] Deleting project workspace...
00:32:07.014 [WS-CLEANUP] Deferred wipeout is used...
00:32:07.021 [WS-CLEANUP] done
00:32:07.023 [Pipeline] }
00:32:07.044 [Pipeline] // catchError
00:32:07.057 [Pipeline] sh
00:32:07.338 + logger -p user.info -t JENKINS-CI
00:32:07.349 [Pipeline] }
00:32:07.368 [Pipeline] // stage
00:32:07.374 [Pipeline] }
00:32:07.392 [Pipeline] // node
00:32:07.398 [Pipeline] End of Pipeline
00:32:07.432 Finished: SUCCESS